
Friday, May 25, 2007

The top 10 dead (or dying) computer skills

I try to avoid postings that just refer you to other blogs or articles, but I've succumbed. ComputerWorld's The top 10 dead (or dying) computer skills prompted a bit of nostalgia. I scored 91% [giving myself 1% for the time I bought a book on COBOL while at uni ... and had the good sense to take it no further than that!!].

OS/2 brings back memories, of which I was also reminded when I first checked out Google's code search and found some of my 1995 OS/2 code lying around! [NB: these days, I look at this code and shudder "Eek!... buffer overflow vulnerability!!" ... security just wasn't front of mind back then! ]. But it also reminds me of how much thought I put into the decision to adopt C++ on OS/2. It very much felt like "this is a decision that I'll live with for years". But 12 years later, in 2007, that decision-making process seems so naive and foreign. Now it is routine to dabble in a couple of scripting languages, some Java, even some C++. The right (or most fun) tool for the job, right?

If "Programming Language Bigotry" counts as a skill (some people certainly practiced and honed it as if it were one), then boy, am I glad it seems to be a thing of the past. Perhaps it deserves to be #1 on this list!

After a brief post-dot-boom hiatus, the dramatic rate of evolution is certainly back, spurred on by Web 2.0 hype. The rate of technological change has indeed become so "normal" that a top 10 list hardly scratches the surface. Personally, I would have voted for int 21h. I'm sure generations to come will have absolutely no idea what that means, but for me and presumably many others, that one phrase sums up a whole year of computer science.

For many (myself included), To Be Alive is To Be Learning and vice versa. The new religion if you will. "Lifelong learning" or "learning for life" are too trite and miss the essential truth.

Others may say that to be continuously learning is to be in a perpetual state of childhood. Look at some of the toys we are learning about and maybe they have a point!

Postscript: I just re-listened to WebDevRadio Episode 18, which reminded me that ColdFusion is not dead!! At least according to the guys at Mach-II...

Tuesday, May 22, 2007

Monitoring log files on Windows with Grid Control

The Oracle Grid Control agent for Windows (10.2.0.2) is missing the ability to monitor arbitrary log files. This was brought up recently in the OTN Forums. The problem seems to have been identified by Oracle earlier this year (Bug 6011228) with a fix coming in a future release.

So what to do in the meantime? Creating a user defined metric is one approach, but has its limitations.
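For what it's worth, an OS-based UDM basically just runs a script and reads back lines of the form "em_result=<value>". Something like this (purely illustrative; the paths, arguments and output convention should all be checked against your own install) could do a crude pattern count:

#!/usr/bin/perl
# Illustrative helper a user-defined metric could invoke: count the lines
# in a log file that match a pattern and report the count in the
# "em_result=" form the OS command fetchlet expects.
use strict;
use warnings;

my ($logfile, $pattern) = @ARGV;

open my $fh, '<', $logfile
    or do { print "em_result=0\n"; exit 0; };   # missing file => report zero matches

my $count = grep { /$pattern/ } <$fh>;
close $fh;

print "em_result=$count\n";

But a UDM like this still gives you none of the per-pattern thresholds, ignore patterns or matched-content messages that the real metric provides.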

I couldn't help thinking that the log file monitoring support already provided for Linux must be 80% of what's required to run under Windows. A little digging around confirmed it. What I'm going to share today is a little hack to enable log file monitoring for a Windows agent. First the disclaimers: the info here is purely from my own investigation; changes you make are probably unsupported; try it at your own risk; back up any files before you modify them, etc. etc.!!

Now the correct way to get your log file monitoring working would be to request a backport of the fix from Oracle. But if you are brave enough to hack this yourself, read on...

First, let me describe the setup I'm testing with. I have a Windows 10.2.0.2 agent talking to a Linux 10.2.0.2 Management Server. Before you begin any customisation, make sure the standard agent is installed and operating correctly. Go to the host home page and click on the "Metric and Policy Settings" link - you should not see a "Log File Pattern Matched Line Count" metric listed (if you do, then you are using an installation that has already been fixed).

To get the log file monitoring working, there are basically 5 steps:

  1. In the Windows agent deployment, add a <Metric NAME="LogFileMonitoring" TYPE="TABLE"> element to $AGENT_HOME\sysman\admin\metadata\host.xml

  2. In the Windows agent deployment, add a <CollectionItem NAME="LogFileMonitoring"> element to $AGENT_HOME\sysman\admin\default_collection\host.xml

  3. Fix a bug in $AGENT_HOME\sysman\admin\scripts\parse-log1.pl

  4. Reload/restart the agent

  5. In the OEM console, configure a rule and test it


Once you have done that, you'll be able to monitor log files just as you can with agents running on other host operating systems, and see errors reported in the Grid Control console.


So let's quickly cover the configuration steps.

Configuring metadata\host.xml
Insert the following in $AGENT_HOME\sysman\admin\metadata\host.xml on the Windows host. NB: this is copied from the corresponding host.xml file used in a Linux agent deployment.
<Metric NAME="LogFileMonitoring" TYPE="TABLE">
  <ValidMidTierVersions START_VER="10.2.0.0.0" />
  <ValidIf>
    <CategoryProp NAME="OS" CHOICES="Windows"/>
  </ValidIf>
  <Display>
    <Label NLSID="log_file_monitoring">Log File Monitoring</Label>
  </Display>
  <TableDescriptor>
    <ColumnDescriptor NAME="log_file_name" TYPE="STRING" IS_KEY="TRUE">
      <Display>
        <Label NLSID="host_log_file_name">Log File Name</Label>
      </Display>
    </ColumnDescriptor>
    <ColumnDescriptor NAME="log_file_match_pattern" TYPE="STRING" IS_KEY="TRUE">
      <Display>
        <Label NLSID="host_log_file_match_pattern">Match Pattern in Perl</Label>
      </Display>
    </ColumnDescriptor>
    <ColumnDescriptor NAME="log_file_ignore_pattern" TYPE="STRING" IS_KEY="TRUE">
      <Display>
        <Label NLSID="host_log_file_ignore_pattern">Ignore Pattern in Perl</Label>
      </Display>
    </ColumnDescriptor>
    <ColumnDescriptor NAME="timestamp" TYPE="STRING" RENDERABLE="FALSE" IS_KEY="TRUE">
      <Display>
        <Label NLSID="host_time_stamp">Time Stamp</Label>
      </Display>
    </ColumnDescriptor>
    <ColumnDescriptor NAME="log_file_match_count" TYPE="NUMBER" IS_KEY="FALSE" STATELESS_ALERTS="TRUE">
      <Display>
        <Label NLSID="host_log_file_match_count">Log File Pattern Matched Line Count</Label>
      </Display>
    </ColumnDescriptor>
    <ColumnDescriptor NAME="log_file_message" TYPE="STRING" IS_KEY="FALSE" IS_LONG_TEXT="TRUE">
      <Display>
        <Label NLSID="host_log_file_message">Log File Pattern Matched Content</Label>
      </Display>
    </ColumnDescriptor>
  </TableDescriptor>
  <QueryDescriptor FETCHLET_ID="OSLineToken">
    <Property NAME="scriptsDir" SCOPE="SYSTEMGLOBAL">scriptsDir</Property>
    <Property NAME="perlBin" SCOPE="SYSTEMGLOBAL">perlBin</Property>
    <Property NAME="command" SCOPE="GLOBAL">%perlBin%/perl</Property>
    <Property NAME="script" SCOPE="GLOBAL">%scriptsDir%/parse-log1.pl</Property>
    <Property NAME="startsWith" SCOPE="GLOBAL">em_result=</Property>
    <Property NAME="delimiter" SCOPE="GLOBAL">|</Property>
    <Property NAME="ENVEM_TARGET_GUID" SCOPE="INSTANCE">GUID</Property>
    <Property NAME="NEED_CONDITION_CONTEXT" SCOPE="GLOBAL">TRUE</Property>
    <Property NAME="warningStartsWith" SCOPE="GLOBAL">em_warning=</Property>
  </QueryDescriptor>
</Metric>

In the top-level TargetMetadata element, also increment the META_VER attribute (in my case, changed from "3.0" to "3.1").
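For reference, the opening element in my file then looked something like this (keep whatever other attributes your copy already has; only META_VER changes):

<TargetMetadata META_VER="3.1" TYPE="host">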

Configuring default_collection\host.xml
Insert the following in $AGENT_HOME\sysman\admin\default_collection\host.xml on the Windows host. NB: this is copied from the corresponding host.xml file used in a Linux agent deployment.
<CollectionItem NAME="LogFileMonitoring">
  <Schedule>
    <IntervalSchedule INTERVAL="15" TIME_UNIT="Min"/>
  </Schedule>
  <MetricColl NAME="LogFileMonitoring">
    <Condition COLUMN_NAME="log_file_match_count"
               WARNING="0" CRITICAL="NotDefined" OPERATOR="GT"
               NO_CLEAR_ON_NULL="TRUE"
               MESSAGE="%log_file_message%. %log_file_match_count% crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold."
               MESSAGE_NLSID="host_log_file_match_count_cond" />
  </MetricColl>
</CollectionItem>

A bug in parse-log1.pl?
This may not be an issue in your deployment, but in mine I discovered that the script had a minor issue due to an unguarded use of the Perl symlink function (a feature not supported on Windows of course).

The original code around line 796 in $AGENT_HOME\sysman\admin\scripts\parse-log1.pl was:
...
my $file2 = "$file1".".ln";
symlink $file1, $file2 if (! -e $file2);
return 0 if (! -e $file2);
my $signature2 = getSignature($file2);
...

This I changed to:
...
my $file2 = "$file1".".ln";
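# probe whether symlink() is actually implemented on this platform
# (it isn't on Windows) before attempting to use it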
return 0 if (! eval { symlink("",""); 1 } );
symlink $file1, $file2 if (! -e $file2);
return 0 if (! -e $file2);
my $signature2 = getSignature($file2);
...

Reload/restart the agent
After you've made the changes, reload or restart your agent, either from the Windows "Services" control panel or with "emctl reload agent" from the command line. Check the management console to make sure agent uploads have resumed properly, and then you should be ready to configure and test log file monitoring.
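From memory, the command-line version of that is roughly the following, run from the agent's bin directory (the status and upload steps are just a quick way to confirm the agent is happy and pushing data to the OMS again):

cd $AGENT_HOME\bin
emctl reload agent
emctl status agent
emctl upload agent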

Sunday, May 20, 2007

Validating Oracle SSO Configuration

A failing OC4J_SECURITY process recently had me digging out an old script I had put together to test Oracle Application Server Single-Sign-On (OSSO) configuration.

How and where the OSSO server keeps its configuration is a weird and wonderful thing. The first few times I faced OSSO server issues I remember digging through a collection of Metalink notes to piece together the story. It was after forgetting the details a second time that I committed the understanding to a script (validateSso.sh).

Appreciating the indirection used in the configuration is the key to understanding how it all really hangs together, which can really help if you are trying to fix a server config issue. Things are basically chained together in 3 links:

1. Firstly, the SSO server uses a privileged connection to an OID server to retrieve the OSSO (database) schema password.

2. With that password, it can retrieve the SSO OID (ldap) server connection details from the OSSO (database) schema.

3. Thus the SSO Server finally has the information needed to connect to the OID server that contains the user credentials.

The validateSso.sh script I've provided here gives you a simple and non-destructive test of all these steps. The most common problem I've seen in practice is that the OSSO schema password stored in OID gets out of sync with the actual OSSO schema password. These problems can have various causes, but the script will identify the exact point of failure in a jiffy.
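To give a flavour of what the script checks, here's a rough sketch of the three links as shell commands. The DN, attribute name and connection placeholders are from memory of a typical 10g infrastructure install, so treat them as illustrative rather than gospel:

#!/bin/sh
# Link 1: privileged bind to OID to read the ORASSO schema password
# (DN and attribute are typical of a 10g infrastructure install -- verify yours)
ORASSO_PW=`ldapsearch -h $OID_HOST -p $OID_PORT -D cn=orcladmin -w $ORCLADMIN_PW \
    -b "cn=IAS Infrastructure Databases,cn=IAS,cn=Products,cn=OracleContext" \
    -s sub "orclresourcename=ORASSO" orclpasswordattribute \
  | grep -i orclpasswordattribute | cut -d= -f2`

# Link 2: log in to the ORASSO schema with that password (this is where the
# LDAP connection details live) -- if this fails, the password stored in OID
# is out of sync with the database
echo "select 'ORASSO login OK' from dual;" | sqlplus -s orasso/$ORASSO_PW@$INFRA_DB

# Link 3: finally, confirm we can bind to the OID server that holds the user credentials
ldapbind -h $OID_HOST -p $OID_PORT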

Monday, May 14, 2007

Getting your Oracle Forum posts as RSS

In my last post I said one of my "Top 10" wishes for OTN was to be able to get an RSS feed of posts by a specified member to the Oracle Forums.

At first it may sound a bit narcissistic to have a feed that allows you to follow what you have written yourself!

It was my exploration of Jaiku that prompted the thought. Since "presence" is their big thing, I've been experimenting to see what it's like to have Jaiku aggregate all your web activity. So far it looks really cool - I love the interface. Must say that I'm not sure how useful Jaiku may turn out to be in the long run ... I suspect it works best if you have a whole lot of your friends also using it. NB: the Jaiku guys are particularly focused on mobile phones. It's not something I've tried yet because I think it would be a bit expensive from where I live.

So Jaiku was the catalyst for me thinking about getting an RSS feed of my forum posts. Recently I've been trying to make an extra effort to contribute to the forums; frankly, they've always seemed a little quieter than I think they should be. So seeing any forum posts I make highlighted on Jaiku should be one of the neat indicators of my "web presence".

Problem is, while you can get a web page that lists your recent posts, and you can subscribe for email alerts when authors post, I wasn't able to find a way of getting an RSS feed for a specific author's posts.

So I created a little Perl script that scrapes the HTML and generates an RSS feed (using XML::RSS::SimpleGen). I've packaged it as a CGI program on a server I have access to. That's what I registered on my experimental Jaiku site, and it works like a charm.

Until Oracle build this feature into the forums, feel free to take my oracleForumRSS.pl script and experiment away. It's pretty basic, but it's generic for any forum user and ready to go. Sorry, but I'm not hosting it for direct use by others, so you'd have to find your own server with CGI or convert it to a script that spits out a static RSS XML file instead.
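If you'd rather roll your own, the guts of such a script are only a few lines. Here's a minimal sketch using LWP::Simple and XML::RSS::SimpleGen; the profile URL and the regex are placeholders, so expect to tweak them against the real forum HTML:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple qw(get);
use XML::RSS::SimpleGen;

# Placeholder URL -- point this at the "posts by user" page for your forum id
my $user_id = shift or die "usage: $0 user_id\n";
my $url     = "http://forums.oracle.com/forums/profile.jspa?userID=$user_id";
my $html    = get($url) or die "couldn't fetch $url\n";

rss_new($url, "Oracle forum posts", "Recent forum posts for user $user_id");

# Placeholder pattern -- adjust to match the thread links in the real page
while ($html =~ m{<a\s+href="(/forums/thread\.jspa[^"]+)"[^>]*>([^<]+)</a>}g) {
    rss_item("http://forums.oracle.com$1", $2, "");
}

rss_savefile("forum-posts.rss");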

Post-script: Eddie Awad blogged on the "Easy Way" to do this using Dapper. Very cool, thanks Eddie!

Saturday, May 12, 2007

OTN Semantic Web - first look

My last post on the state of the OTN community was probably a little long on rhetoric. So I thought since the move of Oracle Blogs! to the much vaunted Semantic Web seems well underway, it would be worth taking a first look. The old site has taken a bit of stick for being, let's say, less than engaging.

The big improvement the Semantic Web brings is of course all the slicing and dicing it allows to home in on posts of interest, with the blogger tag cloud giving instant feedback (and access) to the bloggers most active on a given subject.

I've got to say, though, it doesn't take long before you realise this site needs a serious design and usability facelift. Urgently!

Maybe it's easy to be critical of something new. Actually, no "maybe" about it...

OK, I won't nitpick too much. The biggest problem is that we've paid for all the slice & dice flexibility in the worst possible way ... all the content has been squeezed off the page!

If Oracle do nothing else, they should push all the filtering and clouds out of the way to make a nice, big section of the page available for the star of the show - the content. Use AJAX to make all the filtering available in an instant, but avoid the clutter. At that point, we have a decent replacement for the old blog site.

But for OTN to truly find its Web 2.0 mojo, I think the Semantic Web is probably laying some important foundations, but it's just the beginning.

My Top 10 OTN/Semantic Web Wishlist
OK, so quick brainstorm, and here's a selection of things I'd really love to see happen on OTN:

  1. Take Down This Wall ... between content types (not what Justin originally meant, I know)! The current layout of the Semantic Web page puts a concrete delineation between content types (podcasts, blogs, forums and other personally attributed content) at the top of the page. At first that may sound fine, but it's locking us into a mindset and behaviour patterns that assume these are all distinct in terms of production and consumption. A destructive and divisive fallacy. Get rid of it, and make the content type just another filter.

  2. Long live the forums! These are places where you can actually have a conversation (instead of trying to have a conversation in blog comments). OK, so they look a bit dated, and the level of participation can be a problem. Nothing a lick'a paint can't fix, and use the Semantic Web to drive usage.

  3. Give Oracle's "celebrity" bloggers a personal forum that links neatly off their blog so that conversations can be usefully launched on the back of a blog post. What the heck, let everyone have a personal forum!

  4. Let me take an RSS feed of my forum post history. It's the kind of thing I'd put on my Jaiku page. At the moment, we can only watch via email.

  5. Let me take an RSS feed of any Semantic Web page I find/define (with all the current filtering etc).

  6. Drop the barriers to participation. The Semantic Web makes concerns about the "quality" of bloggers irrelevant. If they never post, they get buried. OTN registration should include an opt-in for an Oracle-hosted blog, or a link to an externally hosted blog.

  7. An OTN Community Site Badge. To promote participation, how about a logo/widget/javascripty thing that bloggers can put on their blog? It would both brand their blog as part of the OTN community, and also provide a link back to the OTN Semantic Web. It would be neat if it actually had a function (like clustermaps), but I can't think of anything useful right now.

  8. Invert the Semantic Web. OK, we're getting used to coming at it top down. But what if I *start* from someone's blog post? It could be a really neat thing to be able to then explore the Semantic Web from that point out ... to discover related or linked pages. Worth an experiment, I think.

  9. Drop the dopomene theme. I'm in the Semantic Web, discovering fascinating information in ways that have never been possible before. This is really exciting! ... but the look and feel is blaring a very incongruous message that messes with your psyche. We need a better use of colour and graphics.

  10. ... and a dozen other insanely cool Web 2.0 things that I can't even imagine right now, and if I could I'd be well on my way to my first $1b.



Wow, I'm actually getting excited now...

No respect! Should Justin care?

Justin "Dangerfield" Kestelyn launched one of the most lively discussions the Oracle community has ever had with his "I Don't Get It" post some weeks back.

"In particular," he states, "Oracle gets zero credit in this community for its rather aggressive support of blogging (by employees and nonemployees), despite the fact that a rather large blogging community exists and has for some time"

Strangely, there seems to be pretty unanimous agreement that there is a large number of bloggers out there, and some very good ones at that. Vincent McBurney's blog made special mention of Nishant Kaushik, Rob Smythe and Steven Chan for example.

And if we also consider the OTN Podcasts (my favourites being the ones that feature interviews with the "names" like Tom Kyte and Wim Coekaerts), it seems to me pretty evident that we actually have a pretty healthy community of content creators.

But when I look back at what Justin actually said, he was referring specifically to the lack of credit from the Web 2.0 community.

I think the OTN team - and Justin in particular - have been doing a fantastic job with the blogosphere and podcasts. But is that enough to make a stir in the Web 2.0 scene? Maybe a year or two ago it would have, but not any more. Sadly (for Justin) it is now just all too routine.

How many "Web 2.0 firsts" can OTN really claim? The harsh reality is that to make a splash and get some respect in the Web 2.0 community, Oracle needs to do much much more. And I don't think its about content or whether our blog etiquette is any good. Leadership and innovation is the name of the game in two important areas:

  • How to build more effective social networks and find new and better ways for this to deliver real benefit to the community. At present, I'm not sure we even deserve the "community" moniker .. it feels more like a public swimming pool we all just happen to go to, rather than a forum (in the Roman sense) where we meet, discuss and debate.

  • Invent and apply cutting-edge Web 2.0 techniques and technologies to support this goal. Yes, this IS about technology ;) The Web 2.0 community is incredibly dynamic and creative at this point. Take blogs, for example. They've been around for a while. Long enough for people to discover that for some things they are really good, but in other ways they suck (like trying to have a "conversation" in comments). So we now have sites like Twitter, Tumblr, Virb and Jaiku all experimenting with different approaches and trying to push the envelope in meaningful ways. It's this kind of creative experimentation that we haven't seen Oracle doing in the past ... with the one recent exception being the Semantic Web (hopefully an indicator of more great things to come). If Oracle really wants Web 2.0 street cred, OTN should be the playground where it is seen to be exploring the outer limits of what is possible - some of which may find its way back into the Fusion Middleware product line.


One notion we must definitely reject is that somehow we need to coach all the Oracle bloggers into becoming Web Celebs. To do so totally ignores (and destroys) the value of diversity in the community. Personally, I identify five "kinds" of web presence we should embrace:

  1. Leadership and Product Management as a "conversation". These are the celebs and thought leaders engaging with the community, but very much with their corporate responsibility at the fore. Funny thing is, I had the impression Oracle was doing much better in this regard, but it doesn't hold up to inspection. Mark Wilcox is one of the few getting close. Perhaps commercial considerations actually make it a very hard thing to do without tipping the competition too much, or just sounding like a mouthpiece for marketing.

  2. Web 2.0 as Shared Memory. I think this is one of the most understated revolutions going on. As I've blogged before, and as epitomised by the likes of Alejandro Vargas, this is all about using the web to finally Get Knowledge Management Right. These tend to be boring as hell to try and follow unless they are right in your niche. Scenario: one day, you'll be sweating a problem. Ask Google, and thank your lucky stars that there are people around like Alejandro.

  3. Living your professional life online. Probably the most common approach today on OTN. It's a diary, scrapbook and log. You may find some really good gems, but there's no harm in being obscure in this category... you're just one of the community, and it's often done more for your own personal benefit.

  4. The personal/social presence. And yes there is room for all those who are part of the community (because they work at Oracle for example) but just want to talk about baseball!

  5. The audience. Let's not forget the vast majority of people who are searching and reading, but will never do much more than perhaps post a question to a forum or maybe a comment on a blog. For a whole range of reasons there's no value or motivation for them to go further. Don't try and make them blog. It won't work. But should we do everything possible to make sure they are well served by the community ... yes!! Numerically, they ARE the community.


Justin finished his initial post with a somewhat flippant "...maybe I shouldn't even care!". But perhaps he unwittingly hit the nail on the head.

It's a truism in business that if you forget who your customers are, you are doomed. Similarly, if OTN becomes preoccupied with impressing the Web 2.0 community as its primary mission, I'm pretty sure they will find success "inexplicably" elusive (and prove that all of Justin's denials of it being a PR conspiracy are lies!!).

Success will come most easily if OTN focuses on serving its real constituency first - the Oracle community of employees, users and developers. Do that well, and if OTN is indeed pushing the boundaries, then the Web 2.0 cred will be the just reward.

I guess in a way it's like being cool. Try to be cool and you'll fail. You just are (or not, as the case may be).

Friday, May 11, 2007

Do Oracle temp tables behave correctly under DBI?

Andon Tschauschev recently posted on perl.dbi.users concerning an apparent problem with temp tables "disappearing" between statements in the same session. He was using SQL Server via ODBC support.

The discussion and investigation continues, but it made me think to test if there's any similar strange behaviour with Oracle via DBI.

The temporary table model is somewhat different in Oracle, and centers around the "CREATE GLOBAL TEMPORARY TABLE.." statement. Temp table definitions are always global, but data is always private to the session, and whether data persists over a commit depends on whether the qualification "on commit preserve rows" or "on commit delete rows" is specified.

testOraTempTables.pl is a simple test script to check out the behaviour. The good news is that all seems to be A-OK. The temporary table definition is persistent across sessions, but data is not, and importantly (the point of Andon's investigation) data is preserved across DBI calls within the same session, as expected.
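The essence of the test is just a handful of DBI calls. A stripped-down sketch (the usual orcl/scott/tiger placeholders, and none of the multi-connection juggling the full script does):

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Oracle:orcl', 'scott', 'tiger',
                       { AutoCommit => 1, RaiseError => 1 });

$dbh->do('create global temporary table t1 (x varchar2(10)) on commit preserve rows');

my $sth = $dbh->prepare('insert into t1 values (?)');
$sth->execute($_) for 1 .. 3;

# still the same session, but a different statement handle --
# the rows should remain visible
my ($count) = $dbh->selectrow_array('select count(*) from t1');
print "rows visible in this session: $count\n";    # expect 3

$dbh->do('drop table t1');
$dbh->disconnect;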

Sample output from the test program:
C:\MyDocs\Testers2\perl\dbi>perl testOraTempTables.pl orcl scott tiger
[1st connection] connect to orcl {AutoCommit => 1}:
[1st connection] create global temp table:
create global temporary table t1 (x varchar2(10)) on commit preserve rows
[1st connection] insert 3 rows of data into it: insert into t1 values (?)
[1st connection] should be 3 rows because we have "on commit preserve rows" set:
select count(*) from t1 = 3
[2nd connection] connect to orcl:
[2nd connection] should be 0 rows because while the table definition is shared, the data is not:
select count(*) from t1 = 0
[2nd connection] disconnect:
[1st connection] disconnect:
[1st connection] reconnect {AutoCommit => 0}:
[1st connection] should be 0 rows because this is a new session:
select count(*) from t1 = 0
[1st connection] drop the temp table: drop table t1
[1st connection] create global temp table:
create global temporary table t1 (x varchar2(10)) on commit delete rows
[1st connection] insert 3 rows of data into it: insert into t1 values (?)
[1st connection] should be 3 rows because we have autocommit off and not committed yet:
select count(*) from t1 = 3
[1st connection] should be 0 rows because now we have committed:
select count(*) from t1 = 0
[1st connection] disconnect:
[1st connection] reconnect {AutoCommit => 1}:
[1st connection] insert 3 rows of data into it: insert into t1 values (?)
[1st connection] should be 0 rows because we have autocommit on and "on commit delete rows" defined:
select count(*) from t1 = 0
[1st connection] disconnect:
[1st connection] reconnect {AutoCommit => 0}:
[1st connection] drop the temp table: drop table t1
[1st connection] disconnect:

Thursday, May 10, 2007

Burning DVDs with unicode filename support

I have quite an eclectic music collection, which grew considerably last year when I spent quite a bit of time in Tokyo. My favourite Sunday afternoon haunt was HMV @ Times Square, checking out the Shinjuku South indie chart. A suitcase of CDs later, I finally got around to ripping my entire CD collection and adding it to my old ripped vinyl collection. All nicely organised, named and categorised in iTunes.

Feeling very pleased with myself, I wanted to archive all my hard work onto a set of DVDs, only to find that my (Windows-based) recording software baulked at the Japanese, Korean and Chinese characters in the folder and filenames.

My first thought: surely I'd just need to change the file system format? But it didn't take long to discover it wasn't so simple. In fact, of the half a dozen different disk burning packages I ended up trying out (including most of the "major" names), all but one failed to handle my babel of disks correctly.

So I'd like to spread the word. VSO CopyToDVD was the ONLY product that worked. It costs a few bucks to buy, but I just downloaded the limited-time trial and it passed the unicode test with flying colours. I didn't really poke around all its features, but it seems to pack the lot. And the fact that I could download it and bang out a few DVDs in short order pretty much sums things up.

I'll be buying it, and if you also need to burn disks with unicode file/folder names then I can recommend you check it out too.

Wednesday, May 02, 2007

Getting environment variables on the Oracle database server

Say you have a connection to a remote Oracle Database server and want to get the ORACLE_HOME setting. Or any other environment variable for that matter. As far as I can see, Oracle doesn't provide any direct, supported way to do this.
In 10g however, there's an interesting procedure DBMS_SYSTEM.GET_ENV available which does the job:
set autoprint on
var ORACLE_HOME varchar2(255)
exec dbms_system.get_env('ORACLE_HOME',:ORACLE_HOME)

PL/SQL procedure successfully completed.

ORACLE_HOME
-----------------------------------------
D:\oracle\product\10.2.0\db_1

DBMS_SYSTEM is an undocumented/unsupported package. It mainly seems to be an internal utility package for debugging and event monitoring. The package body itself is wrapped (obfuscated), but we can discover a little about it from the data dictionary. The USER_PROCEDURES view lists the individual procedures available in the package:
select PROCEDURE_NAME from USER_PROCEDURES where OBJECT_NAME = 'DBMS_SYSTEM';
PROCEDURE_NAME
------------------------------
DIST_TXN_SYNC
GET_ENV
KCFRMS
KSDDDT
KSDFLS
KSDIND
KSDWRT
READ_EV
SET_BOOL_PARAM_IN_SESSION
SET_EV
SET_INT_PARAM_IN_SESSION
SET_SQL_TRACE_IN_SESSION
WAIT_FOR_EVENT

And USER_ARGUMENTS can tell us about the parameters. For example:
select OBJECT_NAME,ARGUMENT_NAME,POSITION,DATA_TYPE,IN_OUT
from USER_ARGUMENTS
where PACKAGE_NAME='DBMS_SYSTEM' and OBJECT_NAME='GET_ENV'
order by POSITION;

OBJECT_NAME   ARGUMENT_NAME   POSITION   DATA_TYPE   IN_OUT
-----------   -------------   --------   ---------   ------
GET_ENV       VAR                    1   VARCHAR2    IN
GET_ENV       VAL                    2   VARCHAR2    OUT

Given an environment variable name (VAR), GET_ENV returns its value (VAL). These values come from the system environment of the Oracle server process. If you have a dedicated server configuration, the environment is inherited from the tnslsnr process that spawned the server process. If shared server, then the environment is inherited from whatever process (PMON? PSP0?) started the shared server process.
So an interesting poke around in some Oracle internals, but there are lots of reasons why you shouldn't use this trick in any production situation!

  • It is undocumented and unsupported. The "get_env" procedure seems to have appeared in 10g, and there's no guarantee it will be present in any future versions.

  • There are better solutions. SQL client code shouldn't directly depend on server environment variables.

  • Remember it is instance specific, and may be misleading in a RAC environment.