by admin on May 25, 2006 12:56pm
The Future of IdMU, help us help you...
I would like to thank everyone who posted comments on my Identity Management for UNIX intro web session. While I am keen to get your feedback on the Windows 2003 R2 and Longhorn Beta releases, I am also interested in your views on the direction you feel this product should take in future releases. I have received good feedback so far on topics such as expanding the IdMU feature set to support authentication over LDAP and providing a Kerberos-based solution that knits well with AD.
I would like to hear more ideas and request your opinion on what direction you feel IdMU should take next.
Please take a moment to comment below or submit mail to firstname.lastname@example.org with the subject: IdMU Ideas. I will be responding to comments and email and look forward to a productive discussion.
by admin on May 24, 2006 06:11pm
Submitted by: Alexandre Ferrieux
I'd like to describe my biggest frustration at the Unix-Windows boundary: the lack of a 'file descriptor abstraction' in Windows.
In Unix everything is a file descriptor, on which you simply use read(), write(), and select() regardless of the underlying reality (files, pipes, sockets, devices, pseudoterminals).
In Windows you have a separate set of APIs for each type, with a few bridges here and there offering limited support (not even talking about Windows CE).
Here is my point: this may look like a purely aesthetic consideration (the sheer beauty of having fewer syscalls is irrelevant to the end user). But there is one catch: when it comes to *mixing* all these things together, complexity explodes in the Windows case, and for me there are true show-stoppers.
More precisely: let's try single-threaded, event-driven programming with select()/poll()/WaitForMultipleObjects(). In unix it amounts to giving a list of file descriptors. In Windows it is superficially the same (with handles), but it is *not*, because many handle types are just not waitable. To circumvent that, of course there is overlapped IO. But it is only possible when you open the handle yourself (to allow overlapped mode), not for one you inherit from the parent (like stdin).
Then the only possible workaround is to spawn extra threads doing their blocking IO or type-specific wait. But when it comes to resource-challenged environments (like WinCE), spawning an extra thread is sometimes not an option.
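For illustration, the uniform model Alexandre describes can be sketched on a POSIX system using Python's select module (a wrapper over the same syscall). A minimal sketch, multiplexing two completely different descriptor types - a pipe and a UDP socket - in a single wait; note that on Windows, select() accepts only sockets, which is exactly the limitation at issue:

```python
import os
import select
import socket

# Two very different kinds of descriptors: an anonymous pipe and a UDP socket.
r_fd, w_fd = os.pipe()
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))

os.write(w_fd, b"via pipe")
sock.sendto(b"via socket", sock.getsockname())

# A single select() call waits on both, regardless of the underlying type.
received = []
while len(received) < 2:
    readable, _, _ = select.select([r_fd, sock], [], [], 5.0)
    if not readable:
        break  # timed out; avoid spinning forever
    for src in readable:
        if src is sock:
            data, _ = sock.recvfrom(1024)
        else:
            data = os.read(src, 1024)
        received.append(data.decode())

print(sorted(received))  # prints ['via pipe', 'via socket']
sock.close()
os.close(r_fd)
os.close(w_fd)
```

The point is that the event loop never needs to know what kind of object each descriptor is; the equivalent Win32 program would need per-type wait strategies or extra threads.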
(1) Are MS aware of such limitations in WinCE and even XP ?
(2) Any smart workaround to save me right now ?
Answer (Jeffrey Snover, Architect: Administration Experience Platform):
PowerShell provides a similar abstraction on top of the OS, and we are working with feature teams to get providers. We call these namespace providers. It is slightly different from the UNIX model, but we think it is more powerful. Our design center was admin scripting, so we need to provide these abstractions against the Registry, WMI, AD, SQL, cert stores, etc.
If you download PowerShell and explore the concept of DRIVES (Get-PSDRIVE) then the concept will become a little clearer.
I'll monitor the comments here if you have any additional questions/feedback.
by admin on May 24, 2006 01:57pm
I'm reading Open Sources 2.0 right now. It's a well-composed book of short essays by founders and luminaries of the Open Source movement - people like Chris DiBona, Ian Murdock, Matt Asay, and Danese Cooper, to name just a few.
So far I've read essays by Mitchell Baker (Mozilla), Chris DiBona (Google/Slashdot), Jeremy Allison (Samba), Ben Laurie (Apache), and Michael Olson (Sleepycat). They are all well-written and insightful. The most consistent conclusion of the essays I've read so far is that where development is concerned, Open Source development is not that different from commercial software development - similar (although usually more rapid) lifecycles, requirements, and bug tracking. Key differences the various authors cite are the greater passion and willingness of open source developers to go beyond "working hours" to solve problems, and the general lack of interest in writing documentation as opposed to coding. This short summary unfortunately trivializes the excellent essays, and I encourage you to buy the book and read them yourselves.

I believe that Mitchell Baker's essay in particular offers the most powerful lessons for proprietary and commercial software development companies on how to adapt their practices to shipping open source software. In the Mozilla project, Mitchell was at the forefront of the wrenching practical and emotional shifts required from both AOL/Netscape management and the open source contributors to the project.

Interestingly, Ben Laurie attacks the idea that "many eyes make all bugs shallow", one of the key claims about open source software quality. I myself have been a fan of this idea, and I was surprised to see him dispute it. To put his statements in context, however, Ben is specifically discussing security flaws, which he defines as a different class of problem from a standard "bug" or software defect. His point is that it takes deep expertise and hours of dedicated effort to find security flaws, and that most eyes cannot see them.
The most provocative essay I've read so far is by Michael Olson, who discusses the concept of a "dual licensing" model in detail. In short, dual licensing is a commercial Open Source software (COSS) approach that uses the GPL and full ownership of the software IP to build a self-sustaining open source community, while selling a proprietary license for the same source to proprietary vendors. The proprietary license grants the buyer more rights, including freedom from reciprocity - no need to release their own product under the GPL. This way, paying customers get the benefit of the open source product while retaining much stronger IP protection for themselves. Michael's summary of this balanced model is that the licensing and technology combination must be designed so that "Open source users experience only pleasure in their use of the software" while causing enough pain (Michael's word) to enough customers to make the business of selling proprietary licenses profitable.
This comes to mind because of some of the debates I've seen in the comments on Port 25. Some readers believe that any commercialization of open source software is downright wrong and a violation of the principles of Open Source. Other readers seem quite willing to allow developers of open source software to make a living from their work. I think this may be an irresolvable dispute - a clash of ideals between Open Source as a movement and open source as a development, marketing, and commoditization model.
by admin on May 22, 2006 05:08pm
Hope you're all doing well - I'd like to try to get some traction on the resource kit we talked about earlier.
As a start, I pulled together a list of everything I could find from MS that might be open source related. I thought this could be the start of our 'manifest'. Once we figure out what should be included, we can think about how to distribute it. A web page off of MSDN is the easiest thing; maybe that's enough, maybe there is more. I'd be interested to hear your thoughts.
Also, I haven't had any luck contacting the Igloo author - I wanted to let him know about our recent change in licensing that would allow him to remove the clauses in his license. I've emailed him but haven't gotten a response or seen a change on his web site. If any of you have his ear, please let him know that I come in peace :)
Anyways, here is a list (title followed by URL).
ECMA CLI 1.0 http://www.microsoft.com/downloads/details.aspx?FamilyId=3A1C93FA-7462-47D0-8E56-8DD34C6292F0&displaylang=en
ASP.NET Samples http://www.asp.net/default.aspx?tabindex=5&tabid=41
SCC plug-in for TortoiseCVS http://sourceforge.net/projects/cvssccplugin/
SQL Server 2005 Express http://msdn.microsoft.com/vstudio/express/sql
Visual Web Developer 2005 Express http://msdn.microsoft.com/vstudio/express/sql
by admin on May 22, 2006 12:47pm
We had a great week here because we all got to spend a lot of time out of the office, meeting people from all over the world who came to attend an event in Seattle. Many of the people I ran into had specific, pointed questions about perplexing technology scenarios they had encountered, or were fielding hard questions from their own customers about technology management. Something in these conversations always sticks in my head, and today I learned some very surprising details about the "push" for automation, both process- and technology-based. What I heard in these discussions reinforced the core fundamentals of Technology Management, such as “never replace an expert with what you think is a good application”. Let me explain what I mean:
Let’s start by discussing the role automation plays, be it script- or process-based, in the life of an IT Operations Professional. I uncovered two scenarios: in the first, automation was key to driving efficiency, increasing reliability and predictability, and lowering TCO. In the second, unnecessary automation was implemented in place of genuine expertise, with an undesirable outcome, of course.

Scenario 1 – Successful implementation of automation: The need for automation is almost always driven by a business need that maps back to a simple, repeatable process. Let’s say one of the tasks you’re responsible for in a large environment is maintaining and updating DNS records. Take something as simple as changing a DNS record and assigning a new address to the entry. This is a perfectly simple AND repeatable process that screams to be automated. Putting a simple script or a comprehensive tool around such a scenario would be prudent and wise, as it takes the repetition out of the job and makes the overall process less error-prone, more automated, and dependable.
Scenario 2 – Unnecessary use of automation: The same process can be automated only to a certain extent, after which human expertise becomes critical. Continuing the process from Scenario 1: once the DNS records are changed, it’s very easy to set up additional automation around “validation”. By validation I mean double-checking the change to make sure the prior and successive entries are accurate, leaving no room for error. Why is validation necessary? Because once the DNS records are changed and propagated through the environment, an incorrect entry can wreak havoc and leave even the busiest server unresponsive to name resolution requests. Having a resident analyst who can validate all entries in the request, check the addresses manually before entering them into the script or tool, and do post-change validation helps eliminate the scope of an “outage”, the one word every Operations Professional dreads.
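The automatable half of that post-change validation - confirming that a name now resolves to the address you intended - can be sketched in a few lines of Python (a hypothetical sketch; the hostnames and addresses are illustrative, and the analyst's judgment about *which* entries to check is precisely the part that doesn't automate):

```python
import socket

def validate_record(hostname, expected_ip):
    """Resolve hostname and confirm it maps to the expected IPv4 address."""
    try:
        resolved = socket.gethostbyname(hostname)
    except socket.gaierror:
        # The name doesn't resolve at all - definitely not the expected state.
        return False
    return resolved == expected_ip

# localhost should always map to the IPv4 loopback address.
print(validate_record("localhost", "127.0.0.1"))
```

Run against each changed record after propagation, a check like this catches the fat-fingered entry; it cannot catch the entry that was "correct" but should never have been requested in the first place.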
In conclusion, I’d like to say that our goal as contributors to Port 25 is always to put forward examples and knowledge that help the IT community run their environments. So if there is something specific, be it technology or operations methodology, that you'd like us to dig deeper into, your ideas are ALWAYS most welcome.
Thank you all, and have a great week ahead!
by admin on May 18, 2006 10:30am
Mysteries of Cygwin...
-----Original Message-----
From: steffen
Sent: Saturday, April 29, 2006 8:32 AM
To: Port 25
Subject: (Port25) : You guys should look into _____
Importance: High
cygwin and its mysteries to bring Linux software to Windows
I am using my wife's XP machine a lot after work and hope to compile kdissert (a mindmap tool) for cygwin. It works on coLinux for me already (which you should also discuss) but I felt like not booting something extra.
My effort ended before it really started .... a file aux.h could not be untarred. Google told me that this was a special problem with Windows, as aux.[ch] are reserved names. This is hilarious.
Pleeze ... fix this behaviour and ... give me kdissert.
This is a common problem when porting applications to Win32, as AUX is one of a handful of reserved filenames. The other reserved names, regardless of extension, are CON, PRN, NUL, COMn, and LPTn (a lowercase n represents a digit, so LPT1 or COM2 would be reserved names). The only good workaround is to rename the offending files.
I took a quick look at the kdissert source and found aux.h is included in only 22 files. I would suggest renaming aux.h to something else, like kdissert_aux.h, and either manually editing the source files or using sed (or your stream editor of choice) to make it a little less painful.
Great, you say, but how do you rename or extract a file from an archive if Windows won't let you create it in the first place? Tar just hangs when it gets to the aux.h file from kdissert-1.0.6pre3.tar.bz2. The easiest solution would be to rename and modify the source on a separate Linux machine or VPC. However, if all you have access to is a Windows machine with Cygwin, you can still work around this problem.
Extract out the contents of aux.h to another file using tar.
$ tar -jxvf kdissert-1.0.6pre3.tar.bz2 --to-stdout kdissert-1.0.6pre3/src/kdissert/datastruct/aux.h > kdissert_aux.h
Make sure to exclude aux.h when un-tarring so tar doesn't err out
$ tar -jxvf kdissert-1.0.6pre3.tar.bz2 --exclude kdissert-1.0.6pre3/src/kdissert/datastruct/aux.h
Copy kdissert_aux.h to the correct place
$ cp kdissert_aux.h kdissert-1.0.6pre3/src/kdissert/datastruct/kdissert_aux.h
Modify the source files to use the newly named kdissert_aux.h.
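That last step - editing the 22 files that include aux.h - can be scripted if sed isn't to your taste. A rough Python sketch (the file extensions and the new header name follow the rename suggested above and are otherwise assumptions):

```python
import os
import re

def patch_includes(root, old="aux.h", new="kdissert_aux.h"):
    """Rewrite every #include of `old` to `new` in C/C++ sources under root."""
    pattern = re.compile(r'(#\s*include\s*[<"])' + re.escape(old) + r'([>"])')
    changed = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith((".h", ".c", ".cc", ".cpp")):
                continue
            path = os.path.join(dirpath, name)
            with open(path) as f:
                text = f.read()
            new_text = pattern.sub(r'\g<1>' + new + r'\g<2>', text)
            if new_text != text:
                with open(path, "w") as f:
                    f.write(new_text)
                changed.append(path)
    return changed
```

Point it at the extracted source tree (e.g. `patch_includes("kdissert-1.0.6pre3")`) and it returns the list of files it touched, which should match the 22 found earlier.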
This should at least get you started towards porting kdissert to Win32/Cygwin. You also might want to check out and keep an eye on KDE's native Windows development, since further development of KDE on Cygwin has stopped.
Also see: http://www.microsoft.com/technet/interopmigration/unix/sfu/portappc.mspx
KDE on Cygwin http://kde-cygwin.sourceforge.net/ http://mail.kde.org/pipermail/kde-cygwin/2005-September/003009.html
Development of native KDE on Windows http://sourceforge.net/forum/forum.php?forum_id=507276
by admin on May 17, 2006 02:05pm
VC Summit 2006
Last week I attended the Microsoft VC Summit at our Silicon Valley campus. Before Microsoft and IBM, I helped to build four start-ups, three in the Bay Area, so spending a day with a couple hundred VC folks - talking about industry trends and business models, and in general networking with some great people - was a lot of fun. I did an onstage Q&A with Scott Sandell from NEA on Microsoft, Open Source, our strategy, and our relationship with the venture capital community. One of the questions we spent a good deal of time discussing was the impact of open source software on the venture community. It was an interesting discussion about defining and measuring a successful open source software company. In my opinion, many of these companies are either evolving or starting out with business models that combine open source ‘components’ with commercial components (Greenplum is a good example of this), largely because selling support and services for non-differentiated commodity software is not proving to be a sustainable revenue-generating model for most commercial OSS companies.
Steven Weber’s ‘The Success of Open Source’ discusses this in detail, and Stephen Walli and Matt Asay have been blog-debating this recently as well. I’m interested in business models and I’m interested in analyzing the history of business models. I think one aspect that is often left out of this discussion is that some of these OSS companies have been around for a while, so there is a reasonable history to look back at and measure. Red Hat and MySQL were both founded around 1995 (if memory serves me), and many others can be tracked back six, seven plus years as well. So the question that was discussed at the VC Summit, as well as just this week in the blogosphere, is what qualifies a commercially successful OSS company, and (importantly for investors) how do they rank comparatively to other commercial software companies that VCs may be considering as a potential portfolio company? These venture specific conversations were, of course, very much focused in benchmarking revenue and profitability. I talk a lot about the evolution of commercial and open source models and I think this type of analysis will influence the evolution as vendors, customers, and the investment community start to take a realistic look at the pros and cons of the model. For more information on the VC summit, Don Dodge has a good summary here.
by admin on May 13, 2006 09:59am
Information Accessibility – Why do we need our data everywhere?
As the old Chinese proverb goes, “May you live in interesting times”. Yes, we are truly blessed to live in such interesting times as these. You may ask why. The past twenty to thirty years have fundamentally changed the way we treat information. We have witnessed a commoditization of information since the accelerated growth of the Information Technology sector began in the late seventies, and a paradigm shift in the value associated with information and its evolution into knowledge. While I am very tempted to walk you through the entire cycle, I am sure you’d rather hear about the technological impacts of this change on people’s lives, the foremost being our attitude towards Information Accessibility.
74% of people in the US own a cell phone or mobile device of some sort. A growing percentage of users within this sector have started to diversify from traditional mobile devices (voice-only or data-only) to the type that can do both or more. What happens when a growing number of people discover they can access the information they need or want at any time and any place? They want it NOW. This has ignited phenomenal growth in the mobility ecosystem (hardware, applications, carriers, etc.), pushing the envelope not only on innovation but also on the boundaries of socio-psychological and socio-cultural norms.
So how does this impact the business of technology? To find out more, I spoke with Deepak Jhangiani, a senior consultant in the wireless industry, about emerging trends in the information accessibility business. Deepak identified several key areas at the forefront of this change. Let’s see what these are:
All in all, I think we’re looking at a very productive, exciting, and interesting time in mobility and “Information Accessibility”. I don’t know about you, but I always thought computers were supposed to adapt to us and not the other way around.
by admin on May 11, 2006 05:36pm
Jason Zions pointed us to this newly revised Windows Security and Directory Services for UNIX solution guide, still in beta.
Description of the Guide and access instructions from Luis Camara Manoel, Program Manager, Collaboration Solutions Team:
"The Windows Security and Directory Services for UNIX v1.0 Beta guide provides several solutions for enabling interoperability between UNIX and Windows infrastructures. The solutions included in this Beta release describe multiple options to achieve two different end states:
To download and read the solution online, please visit Microsoft Connect:
Per Jason: "Although the guide gives step-by-step instructions for setting up AD integration only for Solaris 9 and for RedHat 9, it's pretty easy to see how it would extend to lots of other UNIX and UNIX-like systems."
To download the paper you'll need a Passport Account (the link above will prompt you for one) which is used to send updates, if you wish, on new releases and an announcement when the guide is final.
Let us and Luis know what you think!
by admin on May 10, 2006 07:03pm
Share your interop story...
...and find fame and fortune! Maybe not, but we will share your story with the World! (At least that part of the world that visits Port 25...)
We're looking for unique and creative ways that you, the visitors to Port 25, have solved interoperability challenges. Whether you've done something creative in your own mixed environment or helped a customer overcome a technology challenge, we want to hear from you. Each month we'll showcase the most unique and challenging stories submitted by individuals, integrators, and ISVs on Port 25.
Submissions should include:
We'll work with the project submitter to determine the best way to highlight your success on the site. If your project is chosen we'll send you a Port 25 software care package.
Our first submission deadline will be May 25th at 12:00 am PDT.
Please send submissions to: email@example.com with the subject: "Interop Challenge".
by admin on May 08, 2006 08:51pm
Feedback loops are critical. One of my favorite experiences working with Open Source Software came in 1999, when I was working at eToys.com. We used Apache and mod_perl heavily, and we also did some interesting proxy/caching work with mod_proxy. At the time, since we were one of very few high-volume web sites doing this type of work with Apache and mod_proxy, we found a variety of issues. One of them was a bug (in an unusual code path) that allowed a user's session ID to be cached in the Apache server and then accessed by a different user. Unintentional sharing of sessions on an ecommerce site is typically not a good thing. So we wrote a very simple and short bit of code to fix this and submitted it to the mod_proxy maintainer. It was accepted and we got our first ‘+1’ open source commit – and felt like minor champions that day.

However, the developer discussion email list started percolating with complaints that this change broke a few other Apache+mod_proxy configurations on platforms other than the ones I had been testing and running on (eToys.com was largely Linux based). Within minutes we saw emails on the list, and some direct to me, saying the patch busted Apache configurations on Solaris, HP-UX or BeOS or whatever it was I wasn’t testing on. I was a little surprised, but also amazed at the immediacy of the feedback. Then I had a sinking feeling – what else might I have broken, but not yet received feedback on? Knowing that I was not that good of a programmer, worry led to fret and fret led to genuine stress. In discussions with the package maintainer, we agreed to back the patch out. Although I was a little disappointed, it made me realize the power of the community feedback loop, but also the ad hoc nature of how open source software was tested – basically by other developers on whatever they happened to be running.
In some ways this is good and helps drive the Darwinian nature of the bigger OSS projects. In some ways this is scary because it is really an inconsistent testing model. Andrew Morton recently discussed this issue about Linux kernel testing. So how do you tell when the community feedback loop is significant enough (either community size, developer talent, frequency, methodology, tools, etc.) to represent enough critical mass that your OSS project will be used and vetted broadly enough to represent some degree of satisfactory testing coverage? I have some ideas, but I want to open this one up for discussion: what tactics, strategies or even instincts do you use to assess the quality of an OSS project?*
* Also, I’m quite familiar with the various OSS maturity model projects out there. However, I’m skeptical as most of these that I’ve seen are driven by consultants who use these as tools to charge you a service for helping you to assess a given piece of OSS. This is fine, but it isn’t the question I’m asking (consultants have been doing this for years). However, there is the Carnegie Mellon West led OpenBRR project that looks interesting, but I know many of you have ideas on ways you have done this in the past – so in short I’m interested in your experiences.
by admin on May 06, 2006 11:36am
Monitoring Thresholds – Protectors of the Information Overload
British author and inventor Arthur C. Clarke* once said, “Any sufficiently advanced technology is indistinguishable from magic”. Case in point: the extent to which we can monitor, maintain, and manage our environments today compared to the processes and methods of 30 years ago. Today we have more scripts, more monitoring tools, and more ways to manage technology than ever before. Implementing key technologies at the right scale, with the right amount of training for the folks in the trenches, can mean the difference between a disastrous outage averted and several hours of downtime. Falling back on my days in operations, when someone would ask me what the top three challenges of running a production environment spread across six countries and eight datacenters were, I would say “Monitoring, Managing and Maintenance” – the three M’s of Technology.
You see, today we have more of everything – more applications, more platforms, more development – and at a more accelerated pace than ever before. Whether you’re a 10,000+ user environment or a startup site, “uptime” is and always will be a key factor in determining the success of what you’re offering. Your data, service, or whatever you’re hosting for the customer is more than likely bound to an SLA (Service Level Agreement) of some type. SLAs are put in place to ensure that outages, issues, or escalations of any kind are attended to within a time-bound framework. That puts enormous pressure on teams to ensure that any issues emerging from within the environment are escalated immediately, which makes managing the monitoring, alerting, and maintenance of our environments key, especially from a backend services standpoint.
So what’s the best approach – more monitoring tools and more alerts? No, exactly the opposite. The phrase “do more with less” has never been more relevant. The catch in implementing monitoring and management solutions is in the thresholds. This holds true for the hundreds of folks out there who work as Operational Analysts in Tier-1 support. These folks are the first to witness and respond to any alert triggered by the monitoring tools within the environment. If the environment is gigantic, it only means that the triggers and thresholds have to be carefully examined, customized, set, and constantly managed to avoid desensitization.
What’s the best approach to take in such a scenario? Well, the one thing we always want to avoid in any Operations Control Center is “desensitization” to alarms or alerts of any kind. When a specific alarm goes off too many times and you’re asked to ignore it, desensitization sets in, which means that further alarms and alerts will not get the attention or scrutiny they deserve. Thus the title of today’s blog – Monitoring Thresholds – Protectors of the Information Overload. This may not be a good example, but it’s the best one I can come up with after a long week, so here goes: you may remember one of those oft-played movie scenes where someone is trying to cut the wires on an explosive device, and cutting the right wire means life while the wrong one means death. That’s the best way I can describe the sensitivity behind setting thresholds. Why? Because the larger the environment, the more complex and difficult it is to retain sensitivity towards it.
Takeaway: Equilibrium in an ideal Ops world is when you have suppressed the “noise” without suppressing a genuine incident.
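One common way to suppress noise without suppressing a genuine incident is to require a condition to persist before alerting, rather than firing on every sample. A minimal sketch (the metric, threshold, and window are all hypothetical, and real tools express this as configuration rather than code):

```python
def should_alert(samples, threshold, consecutive):
    """Fire only when the last `consecutive` samples all exceed the threshold."""
    recent = list(samples)[-consecutive:]
    return len(recent) == consecutive and all(s > threshold for s in recent)

# A single spike is noise; a sustained breach is a genuine incident.
cpu_history = [40, 95, 42, 41, 96, 97, 98]
print(should_alert(cpu_history, 90, 3))  # prints True: 96, 97, 98 all exceed 90
```

With `consecutive` set to 1 this degrades into the pager-storm behavior described above; tuning that window per environment is exactly the kind of expert judgment the thresholds discussion is about.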
* My mistake on the quote, it was not Isaac Asimov.
by admin on May 05, 2006 03:16pm
Sam interviews Shamit Patel, Project Manager for Identity Management for Unix (IdMU), to discuss the current state of the project and solicit feedback from the Port 25 community to help shape future versions of the project.
Format: wmv Duration: 17:56
Want to learn more? Check out Interop Systems: UNIX Tools for Windows - another great community site for UNIX users, sysadmins, and developers who find themselves needing to interoperate with Windows and Linux systems.
by admin on May 05, 2006 03:08pm
Over the past four weeks, we've been very excited by the activity on Port 25 - the participation has been very encouraging and has largely kept a positive & healthy perspective. Conspiracy theories aside, this is exactly the kind of thoughtful conversation we want to continue to have. And our key message will not change - it is a heterogeneous world in computing, and customers of all sizes will always expect solutions to "just work." Interoperability between platforms - such as Windows, Linux, and UNIX - is key to meeting that expectation.
But getting IT to "just work" is not easy - by any standard. Most of us live this challenge every day, understanding full well that today's solutions will only be dwarfed by tomorrow's challenges.
That's why we're interested in hearing about your biggest - and smallest - technical challenges. Send them in - your toughest pains, your trivial pet peeves - and we'll try to answer them. We've already started here and here. But we know there are a ton more! So starting today, we're calling for all technical questions - send e-mail directly to our lab.
We'll read through each & every one - and start building out responses to help all of us "just work" better together :)
by admin on May 02, 2006 12:08pm
Swimming is my cure for jet lag. I am currently at MES 2006 (the Microsoft Executive Summit) in Mumbai (Bombay), India – an annual event for the top 250 CIOs in India. I’ve been here a couple of days and have been waking up at 3am, so my cure has been a pre-dawn swim in the hotel pool. The hotel I’m staying at has a nice pool, right next to Powai lake and its protected forest, which I later learned is a natural home to leopards and alligators – a fact that keeps me alert while swimming solo.
MES 2006 is a great opportunity to meet many of the top CIOs, IT decision makers and partners in India and I’ve been enjoying it immensely. I did a presentation on our platform strategy, people-ready businesses, and in particular how we think about ‘coopetition’ in this strategy. One of my favorite things about this job is talking to customers about their IT environments, issues and dreams. What has been fascinating about my conversations here in Mumbai has been the many different ways customers have designed and architected for interoperability. From banks to manufacturing companies to consulting services, almost every customer or partner I’ve met with has an interesting story about interoperability in their IT environment. No surprise really, heterogeneity is part of any large IT system, but the recurring theme I’ve noticed here is the pragmatism and clarity of focus on where and when interoperability is needed. And if you’ve ever spent time on the roads in Mumbai, you’ll realize that interoperability is part of everyday life!
One Microsoft partner I had dinner with explained a very large, multi-tier system they build and sell which uses Windows, Unix systems, and a mainframe – all for one application (it is a large and critical application, so this isn’t really overkill for what they do). Although they want to eventually migrate from the mainframe for cost reasons, they have chosen technologies to get the job done as best suited their needs and skills. And – importantly – they factor interoperability into every architectural plan, RFP, or design that they think about – it’s as important to them as feature functionality or testing. It’s a core part of their maturity model. So what do they look for to qualify something as ‘interoperable’? Open and mature standards that have industry wide acceptance. They also understand the difference between open standards and open source, and gave me a very lucid walkthrough of the differences. Simply put, they explained that open source is a development and distribution model and open standards are specifications that can be applied to interfaces and technologies to enable data exchange. It is that clarity that, I believe, has helped them to design for interoperability with their eyes wide open.
One more thing. I had a customer meeting where I heard a great description of IT value. We were talking about software utilization and the dreaded ‘application backlog’ that many CIOs face (CIO magazine has a great column on this here). When I asked about their deployment experiences with Microsoft software, the customer told me a story about their instant messaging deployment. Within 48 hours of deploying Microsoft Live Communications Server for instant messaging and collaboration, they had over 16,000 people using the product. He then said, “Listen Bill, it is actually quite simple: when I can deploy software that 16,000 people immediately start using on their own because it’s important and useful to them, that is value.” A clear and simple definition of value – people use it of their own volition – something we should all remember.
Although swimming does help jet lag, coffee is equally important, so it’s now time to go find some. Until next time, may you avoid stray leopards. -Bill