by admin on May 13, 2006 09:59am
Information Accessibility – Why do we need our data everywhere?
As the old Chinese proverb goes, “May you live in interesting times”. Yes, we are truly blessed to live in such interesting times as these. You may ask yourself why. There are several reasons. The past twenty to thirty years have fundamentally changed the way we treat information. We have witnessed the commoditization of information since the accelerated growth of the Information Technology sector began in the late seventies. We have all witnessed a paradigm shift in the value associated with information and its evolution into knowledge. While I am very tempted to walk the readers through the entire cycle, I am sure you’d rather hear about the technological impacts of this change on people’s lives, the foremost being our attitude towards Information Accessibility.
74% of people in the US own a cell phone or mobile device of some sort. A growing percentage of users within this sector have started to diversify from traditional mobile devices (voice-only or data-only) to the type that can do both or more. What happens when a growing percentage of people discover that they can access the information they need or want at any time and in any place? They want it NOW. This has ignited phenomenal growth in the mobility ecosystem (hardware, applications, carriers, etc.), pushing the envelope not only on innovation but also on the boundaries of socio-psychological and socio-cultural norms.
So how does this impact the business of technology? To find out more, I spoke with Deepak Jhangiani, a senior consultant in the wireless industry, about emerging trends in the information accessibility business. Deepak identified several key areas at the forefront of this change. Let’s see what these are:
All in all, I think we’re looking at a very productive, exciting and interesting time in mobility and “Information Accessibility”. I don’t know about you guys, but I always thought computers were supposed to adapt to us and not the other way round.
by admin on May 24, 2006 06:11pm
Submitted by: Alexandre Ferrieux
I'd like to describe my biggest frustration at the unix-Windows boundary: the lack of 'file descriptor abstraction' in Windows.
In unix everything is a file descriptor, on which you simply use read(), write(), and select() regardless of the underlying reality (files, pipes, sockets, devices, pseudoterminals).
In Windows you have a different set of APIs for every type, with a few bridges here and there offering limited support (not even talking about Windows CE).
Here is my point: this may look like a purely aesthetic consideration (the sheer beauty of having fewer syscalls is irrelevant to the end user). But there is one catch: when it comes to *mixing* all these things together, complexity explodes in the Windows case, and in my case there are true show-stoppers.
More precisely: let's try single-threaded, event-driven programming with select()/poll()/WaitForMultipleObjects(). In unix it amounts to giving a list of file descriptors. In Windows it is superficially the same (with handles), but it is *not*, because many handle types are just not waitable. To circumvent that, of course there is overlapped IO. But it is only possible when you open the handle yourself (to allow overlapped mode), not for one you inherit from the parent (like stdin).
Then the only possible workaround is to spawn extra threads doing their blocking IO or type-specific wait. But when it comes to resource-challenged environments (like WinCE), spawning an extra thread is sometimes not an option.
(1) Are MS aware of such limitations in WinCE and even XP?
(2) Any smart workaround to save me right now?
Answer (Jeffrey Snover, Architect: Administration Experience Platform):
PowerShell provides a similar abstraction on top of the OS, and we are working with feature teams to get providers. We call these namespace providers. It is slightly different from the UNIX model, but we think it is more powerful. Our design center was admin scripting, so we need to provide these abstractions against the Registry, WMI, AD, SQL, cert stores, etc.
If you download PowerShell and explore the concept of drives (Get-PSDrive), the concept will become a little clearer.
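To give a flavor of what Jeffrey is describing – a rough sketch based on the public PowerShell beta, so the exact list of drives and providers may vary by build – the same handful of verbs works against very different stores:

PS> Get-PSDrive                           # list the mounted drives/providers (C:, Env:, HKLM:, Cert:, ...)
PS> Set-Location HKLM:\SOFTWARE           # navigate the registry as if it were a file system
PS> Get-ChildItem                         # "dir" the registry keys under the current location
PS> Get-ChildItem Cert:\CurrentUser\My    # the same verb against the certificate store
PS> Get-Content Env:\PATH                 # read an environment variable as if it were a file

The parallel to the unix question above is deliberate: one small set of verbs (Get-ChildItem, Get-Content, Set-Location) is reused across providers, much as read()/write()/select() are reused across descriptor types.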
I'll monitor the comments here if you have any additional questions/feedback.
by admin on May 22, 2006 05:08pm
Hope you're all doing well - I'd like to try and get some traction on the resource kit that we talked about earlier.
As a start, I pulled together a list of everything I could find from MS that might be open source related. I thought this could be the start of our 'manifest'. Once we figure out what should be included, we can think about how to distribute it. A web page off of MSDN is the easiest thing; maybe that's enough, maybe there is more. I'd be interested to hear your thoughts.
Also, I haven't had any luck contacting the Igloo author - I wanted to let him know about our recent change in licensing that would allow him to remove the clauses in his license. I've emailed him, but haven't gotten a response or seen a change on his web site. If any of you have his ear, please let him know that I come in peace :)
Anyways, here is a list (title followed by URL).
ECMA CLI 1.0 http://www.microsoft.com/downloads/details.aspx?FamilyId=3A1C93FA-7462-47D0-8E56-8DD34C6292F0&displaylang=en
ASP.NET Samples http://www.asp.net/default.aspx?tabindex=5&tabid=41
SCC plug-in for TortoiseCVS http://sourceforge.net/projects/cvssccplugin/
SQL Server 2005 Express http://msdn.microsoft.com/vstudio/express/sql
Visual Web Developer 2005 Express http://msdn.microsoft.com/vstudio/express/sql
by admin on May 11, 2006 05:36pm
Jason Zions pointed us to this newly revised Windows Security and Directory Services for UNIX solution guide, still in beta.
Description of the Guide and access instructions from Luis Camara Manoel, Program Manager, Collaboration Solutions Team:
"The Windows Security and Directory Services for UNIX v1.0 Beta guide provides several solutions for enabling interoperability between UNIX and Windows infrastructures. The solutions included in this Beta release describe multiple options to achieve two different end states:
To download and read the solution online, please visit Microsoft Connect:
Per Jason: "Although the guide gives step-by-step instructions for setting up AD integration only for Solaris 9 and for RedHat 9, it's pretty easy to see how it would extend to lots of other UNIX and UNIX-like systems."
To download the paper you'll need a Passport account (the link above will prompt you for one), which is used, if you wish, to send you updates on new releases and an announcement when the guide is final.
Let us and Luis know what you think!
by admin on May 06, 2006 11:36am
Monitoring Thresholds – Protectors of the Information Overload
British author and inventor Arthur C. Clarke* once said, “Any sufficiently advanced technology is indistinguishable from magic”. Case in point: the extent to which we can monitor, maintain and manage our environments today compared to the processes and methods of 30 years ago. Today we have more scripts, more monitoring tools and more ways to manage technology than ever before. Implementing key technologies at the right scale, with the right amount of training behind them for the folks in the trenches, can mean the difference between a disastrous outage averted and several hours of downtime. Falling back on my days in operations, when someone would ask me what the top three challenges of running a production environment spread across six countries and eight datacenters were, I would say “Monitoring, Managing and Maintenance”, the three M’s of Technology.
You see, today we have more of everything – more applications, more platforms, more development – and at a more accelerated pace than ever before. Whether you’re a 10,000+ user environment or a startup site, “Uptime” is and will always be a key factor in determining the success of what you’re offering. Your data, service, or whatever you’re hosting or offering to the customer is more than likely bound to an SLA (Service Level Agreement) of some type. SLAs are put in place to ensure that outages, issues or escalations of any kind are attended to within a time-bound framework. That puts an enormous amount of burden and pressure on ensuring that any issues emerging from within the environment are escalated immediately. This means that managing the monitoring, alerting and maintenance of our environments is key, especially from a back-end services standpoint.
So what’s the best approach – more monitoring tools and more alerts? No, exactly the opposite. The phrase “do more with less” has never been more relevant. The catch in implementing monitoring and management solutions is in the thresholds. This holds true for the hundreds of folks out there who work as Operations Analysts in Tier-1 support. These folks are the first ones to witness and respond to any alert triggered by the monitoring tools within the environment. If the environment is gigantic, it only means that the triggers and thresholds have to be carefully examined, customized/massaged, set and constantly managed to avoid desensitization.
What’s the best approach to take in such a scenario? Well, the one thing we always want to avoid in any Operations Control Center is “desensitization” to alarms or alerts of any kind. When a specific alarm or alert goes off too many times and you’re asked to ignore it, desensitization sets in, which means that further alarms and alerts will not get the attention or scrutiny they need. Thus the title of today’s blog – Monitoring Thresholds – Protectors of the Information Overload. This may not be a good example, but it’s the best one I can come up with after a long week, so here goes: in the movies you may remember one of those oft-played scenes where someone is trying to cut the wires to an explosive device, and cutting the right wire means life while cutting the wrong one means death. That’s the best way I can describe the sensitivity involved in setting thresholds. Why? Because the larger the environment, the more complex and difficult it is to retain that sensitivity.
Takeaway: Equilibrium in an ideal Ops world is when you have suppressed the “noise” without suppressing a genuine incident.
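To make that equilibrium a little more concrete, here is a minimal shell sketch of the pattern (an illustration only, not from the original post – the threshold, interval and alert address are made up, and it assumes Linux-style uptime output): alert only when a condition persists across several consecutive samples, so a single transient spike never pages anyone.

#!/bin/sh
# Hedged sketch: page on sustained load, not on one-off spikes.
# All numbers below are illustrative; tune them for your own environment.
THRESHOLD=8      # 1-minute load average considered "hot"
REQUIRED=3       # consecutive hot samples needed before we alert
INTERVAL=60      # seconds between samples
count=0
while true; do
    # Pull the 1-minute load average out of uptime (Linux-style output assumed)
    load=$(uptime | awk -F'load average: ' '{print $2}' | cut -d, -f1)
    if [ "$(echo "$load > $THRESHOLD" | bc)" -eq 1 ]; then
        count=$((count + 1))
    else
        count=0
    fi
    if [ "$count" -ge "$REQUIRED" ]; then
        # oncall@example.com is a placeholder; swap in your real alerting hook
        echo "Sustained load of $load on $(hostname)" | mail -s "ALERT: sustained load" oncall@example.com
        count=0
    fi
    sleep "$INTERVAL"
done

The numbers matter far less than the pattern: the consecutive-sample counter is the piece that keeps a momentary blip from turning into noise on the Operations Control Center floor.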
* My mistake on the quote, it was not Isaac Asimov.
by admin on May 18, 2006 10:30am
Mysteries of Cygwin...
-----Original Message-----
From: steffen
Sent: Saturday, April 29, 2006 8:32 AM
To: Port 25
Subject: (Port25): You guys should look into _____
Importance: High
cygwin and its mysteries to bring Linux software to Windows
I am using my wife's XP machine a lot after work and hope to compile kdissert (a mindmap tool) for cygwin. It works on coLinux for me already (which you should also discuss) but I felt like not booting something extra.
My effort ended before it really started .... a file aux.h could not be untarred. Google told me that this was a special problem with Windows as aux.[ch] are reserved names. This is hilarious.
Pleeze ... fix this behaviour and ... give me kdissert.
This is a common problem when porting applications to Win32, as AUX is one of the few reserved filenames. The other reserved filenames, regardless of extension, are CON, PRN, NUL, COMn, and LPTn (a lowercase n represents a digit, so LPT1 or COM2 would be reserved names). The only good workaround is to rename the reserved filenames.
I took a quick look at the kdissert source and found that aux.h is only included in 22 files. I would suggest renaming aux.h to something like kdissert_aux.h and either manually editing the source files or using sed (or your stream editor of choice) to make it a little less painful.
Great, you say, but how do you rename or extract a file from an archive if Windows won't let you create it in the first place? Tar just hangs when it gets to the aux.h file from kdissert-1.0.6pre3.tar.bz2. The easiest solution would be to rename and modify the source on a separate Linux machine or VPC. However, if all you have access to is a Windows machine with Cygwin, you can still work around this problem.
Extract the contents of aux.h to another file using tar.
$ tar jxvf kdissert-1.0.6pre3.tar.bz2 --to-stdout \
    kdissert-1.0.6pre3/src/kdissert/datastruct/aux.h > kdissert_aux.h
Make sure to exclude aux.h when un-tarring so tar doesn't err out
$ tar -jxvf kdissert-1.0.6pre3.tar.bz2 --exclude \
    kdissert-1.0.6pre3/src/kdissert/datastruct/aux.h
Copy kdissert_aux.h to the correct place
$ cp kdissert_aux.h kdissert-1.0.6pre3/src/kdissert/datastruct/kdissert_aux.h
Modify the source files to use the newly named kdissert_aux.h.
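If sed is your tool of choice, something along these lines should take care of most of those 22 files (a rough sketch only – the exact include style and directory layout in the kdissert tree may differ, so inspect the result before building):

$ cd kdissert-1.0.6pre3
$ grep -rl '"aux\.h"' src | xargs sed -i 's/"aux\.h"/"kdissert_aux.h"/g'

Any build files (Makefile.am and friends) that refer to aux.h by name will need the same treatment.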
This should at least get you started towards porting kdissert to Win32/Cygwin. You also might want to check out and keep an eye on KDE's native Windows development, since further development of KDE on Cygwin has stopped.
Also see: http://www.microsoft.com/technet/interopmigration/unix/sfu/portappc.mspx
KDE on Cygwin http://kde-cygwin.sourceforge.net/ http://mail.kde.org/pipermail/kde-cygwin/2005-September/003009.html
Development of native KDE on Windows http://sourceforge.net/forum/forum.php?forum_id=507276
by admin on May 31, 2006 12:28pm
But Then Face to Face
I’ve spent many hours over the past few days combing through the comments on Port 25, and the comments about Port 25 on other sites (blogs, industry news, etc.). I was struck by the mix of hope and suspicion. When you don’t know who you’re dealing with, suspicion is a natural result, compounded in this case by years of mistrust of Microsoft’s motives. I realized while combing through the posts and discussions that I never took the time to introduce myself.
I have been a science geek for as long as I can remember – not great for one’s social life but at least I can say I’ve been kicked out of Chemistry class for discussing quantum physics in the back row…
I’ve worked in Silicon Valley for 12 years, and before that studied and worked in San Diego. My degree is in Cognitive Science from UCSD (a combination of neurobiology, artificial intelligence, and cognitive psychology, founded by the great Don Norman). In the Valley I worked as a software engineer for several years, building desktop, distributed, and web applications in C++ and Java, including DCOM and J2EE. I worked at a couple of normal software companies, and worked crazy hours at 5 startups.
My best times in software engineering were at Ofoto (now Kodak) where I ran the web development and middleware engineering teams. We built a highly scalable photo service using Tomcat and Jakarta on Solaris and Linux, built our own Java-based persistence layer, and used XML for internal and external integration – old hat now, but this was in 2000.
After Ofoto, I went to BEA Systems as Web Services Principal Architect. I got the chance to work with the WebLogic Workshop team (aka Crossgain, the high-profile defection of Tod Nielsen and Adam Bosworth from Microsoft) and customers like Merrill Lynch on “web services architectures” – no one called it SOA back then. Workshop was later open-sourced as Apache Beehive under the leadership of Carl Sjogreen (now a product manager at Google). In late 2002, I joined the WebLogic Integration team to do technical market development and product strategy. I learned a lot about software strategy, got hooked on the web services management concept, and convinced the product management team to build Quicksilver, which eventually became the AquaLogic Service Bus.
In 2004 we started seeing real impact on the core business from JBoss, and my GM Chet Kapoor wanted BEA to get into the open source software game. Top management didn’t like the idea, and shortly after Tod Nielsen left the company, a few dozen directors and VPs followed his lead, as did Chet. Shortly after I’d joined Microsoft, Chet became the CEO of Gluecode and asked me to join. I couldn’t see how their business model could be defended, and stayed at Microsoft. Six months later, Chet sold the company to IBM and became VP of IBM’s Open Source group. I admit that I kicked myself.
I joined Microsoft originally to work directly with startups, in the hope that I could have a positive impact on people pouring their hearts and minds into risky technology bets. After batting 1 for 5 I know how tough it can be. Microsoft generates nearly all its revenue from partners (96%) but gets hammered on lack of innovation, so this seemed like a good fit. In Dan’l Lewin’s group in Silicon Valley (Mountain View) I got to meet and help a number of different startups. During this time, I saw advantages of open source economics for some of these companies, especially in SaaS. It was clear to me that something had to change in our licensing and pricing – two very challenging things to shift. I spent several months advocating within the company for change, with good results.
When Bill Hilf offered me the chance to join the Open Source Software Lab, I jumped at the chance. Open source is a pivot for the software industry at large as well as for Microsoft. I’m very curious to understand the breadth and depth of technologies available in open source, and deeply committed to driving interoperability between open source and Microsoft technologies.
This is longer than I’d intended, so I’ll stop here. For those who want more background, my blog is at http://samus.typepad.com, and I have to point out an interaction I had with Matt Asay, a smart and outspoken leader in the open source community.
PS: In my first post (“Why is it called Computer Science”) one reader pointed out the similarity of the topic to Paul Graham’s brilliant essay “Hackers and Painters”. While the point was made with a degree of suspicion, I’m grateful to the poster for leading me to Paul’s essay. Thanks, cblazek.
by admin on May 25, 2006 12:56pm
The Future of IdMU, help us help you...
I would like to thank everyone who posted comments on my Identity Management for UNIX intro web session. While I am keen on getting your feedback on the Windows 2003 R2 and Longhorn Beta releases, I am also interested in getting your views on the direction you feel this product should take in future releases. I have received good feedback so far on topics such as expanding the IdMU feature set to support authentication over LDAP and providing a Kerberos-based solution that knits well with AD.
I would like to hear more ideas and request your opinion on what direction you feel IdMU should take next.
Please take a moment to comment below or submit mail to email@example.com with the subject: IdMU Ideas. I will be responding to comments and email and look forward to a productive discussion.
by admin on May 05, 2006 03:16pm
Sam interviews Shamit Patel, Project Manager for Identity Management for Unix (IdMU), to discuss the current state of the project and solicit feedback from the Port 25 community to help shape future versions of the project.
Format: wmv Duration: 17:56
Want to learn more? Check out Interop Systems: UNIX Tools for Windows - another great community site for UNIX users, sysadmins and developers who find themselves needing to interoperate with Windows and Linux systems.
by admin on May 30, 2006 11:53am
‘Alone Together’ and a little economics
Time for some confessions. I’m fully addicted to World of Warcraft (abbreviated WoW). There, I said it. I’ve been a long-time gamer, particularly of Massively Multiplayer Online Games (MMOGs) and Real Time Strategy games (I’ll take you in Starcraft *any* day of the week), but it’s been a while since a game has been able to work its way this heavily into my life. And although I balance fairly well with Call of Duty 2 on my Xbox 360, WoW is a great game that has me hooked.
I have a relatively busy life, so video game playing is usually relegated to the wee hours, typically after 11pm (I’ve never been a big sleeper), and I’ve even played at 35,000 feet on a wireless/satellite connection while flying on an SAS flight from Amsterdam to Seattle (yes, it’s that addictive). To make matters worse (or better?), when I’m not playing I often like to read about video game theory, particularly papers that research the sociological aspects of multiplayer games. I just finished what I think may be the best paper I’ve read thus far on WoW.
The paper, titled “Alone Together? Exploring the Social Dynamics of Massively Multiplayer Online Games”, from researchers at Xerox PARC and Stanford’s Virtual Human Interaction Lab, is a fascinating look at the sociological aspects of WoW, with some particularly insightful analysis into what really makes the game ‘work’ and succeed. What sold me on this analysis were their research methods: from the launch of WoW (Nov. 2004) the researchers were active WoW gamers and wrote software extensions to the client-side UI to capture interesting game statistics (every 5-15 minutes) while they played. So this study is based on data obtained from the game itself, rather than on interviews or surveys. This is what I call research! And you can tell just from reading this paper that these guys are real WoW players, not ‘observers’ from the outside.
I won’t recap the entire paper, but I will highlight some of their findings, as I think they are relevant to the discussions here on Port 25. Although WoW is a highly social virtual environment, the authors find that many of its players play alone (thus the phrase ‘alone together’), and their results describe a different type of ‘social factor’ at work in WoW. The importance of an audience, a social presence and a spectacle are the three factors they find explain the appeal of being ‘alone together’ in multiplayer games like WoW. In short, the authors tap into what film theory often calls the voyeur phenomenon – where the audience/viewer enjoys the ability to look inside someone else’s life. When movies truly succeed at ‘suspending disbelief’ it is often because the filmmaker has created that suspension in the viewer’s mind, convincing them that they truly are the ‘voyeur’ of the action onscreen. Of course, what interactive multiplayer gaming adds to this dimension is the ability to be both viewer and subject of the voyeurism, which creates the appeal of ‘watching’ and of wanting to ‘be’ the level 60 Night Elf rogue displaying their/your prowess in front of others. Here’s Sam, one of our own OSS lab engineers, showing his stealthiness by sneaking into Undercity.
Name: Jbauer Race: Night Elf Class: Rogue Level: 60 Server: Frostwolf Guild: Dark Front
This phenomenon, as the paper identified, perpetuates both game play and game satisfaction (i.e., the ‘alone together’ trend).
So how does this relate to Port 25? A point raised at the end of the paper is the need for social navigation tools to better understand certain dynamics in games like WoW (such as guild cohesion or churn rate). When reading this paper, I couldn’t help but think how this type of research into social dynamics might be applied to software development communities. Granted, working on code has different dynamics than battling basilisks, but there are many principles and characteristics that are very similar – network-based, remote communication; level-based grouping and different dynamics of interaction based on level and participation; satisfaction as a driver of continued participation in a group; and so on. I think it would be interesting to look at possible correlations between these two social networks (MMOGs and software communities), as it’s often at unusual intersections that we find meaningful patterns. If nothing else, it will likely result in some useful social navigation and analysis tools.
Lastly, I’ve been doing a bit of flying lately, so I’ve been catching up on my reading. I just finished David Warsh’s “Knowledge and the Wealth of Nations.” In particular, the second half of the book offers a very solid look at Stanford economist Paul Romer’s work on the economic issues that drive technological growth. I would recommend reading the section on the costs of ‘idea’ inventions and expected growth and returns. Many business books I read these days fail to recognize some of the essential economic principles Romer investigates – I hope to blog more on this soon.
by admin on May 17, 2006 02:05pm
VC Summit 2006
Last week I attended the Microsoft VC Summit at our Silicon Valley campus. Before Microsoft and IBM, I helped build four start-ups, three in the Bay Area, so spending a day with a couple hundred VC folks talking about industry trends and business models, and in general networking with some great people, was a lot of fun. I did a Q&A onstage with Scott Sandell from NEA on Microsoft, open source, our strategy and our relationships with the venture capital community. One of the questions we spent a good deal of time discussing was the impact of open source software on the venture community. It was an interesting discussion about defining and measuring a successful open source software company. In my opinion, many of these companies are either evolving or starting out with business models that combine open source ‘components’ with commercial components (Greenplum is a good example of this), largely because selling support and services for non-differentiated commodity software is not proving to be a sustainable revenue-generating model for most commercial OSS companies.
Steven Weber’s ‘The Success of Open Source’ discusses this in detail, and Stephen Walli and Matt Asay have been blog-debating this recently as well. I’m interested in business models and in analyzing the history of business models. I think one aspect that is often left out of this discussion is that some of these OSS companies have been around for a while, so there is a reasonable history to look back at and measure. Red Hat and MySQL were both founded around 1995 (if memory serves me), and many others can be tracked back six, seven plus years as well. So the question that was discussed at the VC Summit, as well as just this week in the blogosphere, is what qualifies as a commercially successful OSS company, and (importantly for investors) how do such companies rank against other commercial software companies that VCs may be considering as potential portfolio companies? These venture-specific conversations were, of course, very much focused on benchmarking revenue and profitability. I talk a lot about the evolution of commercial and open source models, and I think this type of analysis will influence that evolution as vendors, customers, and the investment community start to take a realistic look at the pros and cons of the model. For more information on the VC Summit, Don Dodge has a good summary here.
by admin on May 24, 2006 01:57pm
I'm reading Open Sources 2.0 right now. It's a well-composed book of short essays by founders and luminaries of the Open Source movement - people like Chris DiBona, Ian Murdock, Matt Asay, and Danese Cooper, to name just a few.
So far I've read essays by Mitchell Baker (Mozilla), Chris DiBona (Google/Slashdot), Jeremy Allison (Samba), Ben Laurie (Apache), and Michael Olson (Sleepycat). They are all well-written and insightful. The most consistent conclusion across the essays I've read so far is that, where development is concerned, open source development is not that different from commercial software development - similar (although usually more rapid) lifecycles, requirements and bug tracking. Key differences the various authors cite are the greater passion and willingness of open source developers to go beyond "working hours" to solve problems, and the general lack of interest in writing documentation as opposed to coding. This short summary unfortunately trivializes the excellent essays, and I encourage you to buy the book and read them yourselves.

I believe that Mitchell Baker's essay in particular offers the most powerful lessons for proprietary & commercial software development companies on how to adapt their practices to shipping open source software. In the Mozilla project, Mitchell was at the forefront of the wrenching practical and emotional shifts required from both AOL/Netscape management and the open source contributors to the project.

Interestingly, Ben Laurie attacks the idea that "many eyes make all bugs shallow", one of the key claims about open source software quality. I myself have been a fan of this idea, and I was surprised to see him dispute it. To put his statements in context, however, Ben is specifically discussing security flaws, which he defines as a different class of problem from a standard "bug" or software defect. His point is that it takes deep expertise and hours of dedicated effort to find security flaws, and that most eyes cannot see them.
The most provocative essay I've read so far is by Michael Olson, who discusses the "dual licensing" model in detail. In short, dual licensing is a commercial Open Source software (COSS) approach in which the vendor retains full ownership of the software IP, releases it under the GPL to build a self-sustaining open source community, and sells a proprietary license to the same source for proprietary vendors. The proprietary license grants the buyer more rights, including no reciprocity - no need to release their own product under the GPL. This way, paying customers get the benefit of the open source product while retaining much stronger IP protection for themselves. Michael's summary of this balanced model is that the licensing & technology combination must be designed so that "Open source users experience only pleasure in their use of the software" while causing enough pain (Michael's word) to enough customers to make the business of selling proprietary licenses profitable.
This comes to mind most strongly to me because of some of the debates I've seen in the comments on Port 25. Some readers believe that any commercialization of open source software is downright wrong, and a violation of the principles of Open Source. Other readers seem quite willing to allow developers of open source software to make a living from their work. I think this may be an irresolvable dispute - a clash of ideals between Open Source as a movement and open source as a development, marketing, and commoditization model.
by admin on May 22, 2006 12:47pm
We had a great week here because we all got to spend a lot of time out of the office meeting people from all over the world who came to attend an event in Seattle. A lot of the people I ran into had specific, pinpointed questions about technologically perplexing scenarios they had encountered, or were facing hard questions from their customers about technology management. Something in these conversations always sticks in my head, and today I learned some very surprising details about the "push" for automation, both process- and technology-based. What I heard in these discussions reinforced core fundamentals of Technology Management such as "never replace an expert with what you think is a good application". Let me explain what I mean:
Let’s start by discussing the role of automation in general, be it scripting or process based, and the part it plays in the life of an IT Operations Professional. I uncovered two scenarios: the first where automation was key to driving efficiencies, increasing reliability and predictability, and lowering TCO; the second where unnecessary automation was implemented in place of genuine expertise, with an undesirable outcome of course.

Scenario 1 – Successful implementation of automation: The need for automation is almost always driven by a business need mapping back to a simple, repeatable process. Let’s say one of the tasks you’re responsible for in a large environment is maintaining and updating DNS records. Take something as simple as changing a DNS record and assigning a new address to the entry. This is a perfectly simple AND repeatable process that screams to be automated. Putting a simple script or a comprehensive tool around such a scenario would be prudent and wise, as it takes the repetition out of the job and makes the overall process less error-prone, more automated and more dependable.
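As a rough illustration of what such a script might look like – none of this is from the original post; the zone, name server and addresses are invented, and a real deployment would also want a TSIG key and a zone configured to accept dynamic updates – a change like this often boils down to a few lines around nsupdate, the dynamic DNS update client that ships with BIND:

#!/bin/sh
# change-record.sh <hostname> <new-ip>  -- hedged sketch, not production code
HOST="$1"; NEWIP="$2"
nsupdate <<EOF
server ns1.example.com
zone example.com
update delete ${HOST}.example.com A
update add ${HOST}.example.com 3600 A ${NEWIP}
send
EOF

Run as, say, ./change-record.sh web01 10.0.0.42. The point is not the particular tool; it is that the change itself is mechanical, which is exactly what makes it a good candidate for automation – and, as Scenario 2 below argues, why the validation around it is where human expertise still belongs.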
Scenario 2 – Unnecessary use of automation: The same process can be automated only to a certain extent, after which human expertise becomes critical. Continuing the process from Scenario 1, once the DNS records are changed, it’s very easy to set up an additional script or automation around “validation”. By validation I mean double-checking the change to make sure that the prior and successive entries are accurate, leaving no room for error. Why is validation necessary? Because once the DNS records are changed and propagated through the environment, an incorrect entry can wreak havoc and make the busiest server unresponsive to any name resolution request. Having a resident analyst who can validate all entries in the request, check the addresses manually before entering them into the script/tool, and do post-change validation only helps eliminate the scope for an “outage”, the one word every Operations Professional dreads.
In conclusion, I’d like to say that our goal as contributors to Port 25 is always to try to put forward examples and knowledge that help the IT community run their environments. So if there is something specific, be it a technology or an operations methodology, that you'd like us to dig deeper into, your ideas are ALWAYS most welcome.
Thank you all, and have a great week ahead!
by admin on May 10, 2006 07:03pm
Share your interop story...
...and find fame and fortune! Maybe not, but we will share your story with the World! (At least that part of the world that visits Port 25...)
We're looking for unique and creative ways that you, the visitors to Port 25, have solved interoperability challenges. Whether you've done something creative in your own mixed environment or you've helped a customer overcome a technology challenge, we want to hear from you. Each month we'll showcase the most unique and challenging stories submitted by individuals, integrators and ISVs on Port 25.
Submissions should include:
We'll work with the project submitter to determine the best way to highlight your success on the site. If your project is chosen we'll send you a Port 25 software care package.
Our first submission deadline will be May 25th at 12:00 am PDT.
Please send submissions to: firstname.lastname@example.org with the subject: "Interop Challenge".
by admin on May 08, 2006 08:51pm
Feedback loops are critical. One of my favorite experiences working with Open Source Software came in 1999 when I was working at eToys.com. We used Apache and mod_perl heavily, and we also did some interesting proxy/caching work with mod_proxy. At the time, since we were one of very few high-volume web sites doing this type of work with Apache and mod_proxy, we found a variety of issues. One of the issues we found was a bug (in an unusual code path) that allowed a user's session ID to be cached in the Apache server and then accessed by a different user. Unintentional sharing of sessions on an ecommerce site is typically not a good thing. So we wrote a very simple and short bit of code to fix this and submitted it to the mod_proxy maintainer. It was accepted and we got our first ‘+1’ open source commit – and felt like minor champions that day.

However, the developer discussion email list started percolating with complaints that this change broke a few other Apache+mod_proxy configurations on platforms other than the ones I had been testing and running on (eToys.com was largely Linux based). So we quickly (within minutes) saw emails on the list, and some sent directly to me, saying the patch busted some of their Apache configurations on Solaris, HP-UX or BeOS or whatever it was I wasn’t testing on. I was a little surprised, but also amazed at the immediacy of the feedback. Then I had a sinking feeling – what else might I have broken, but not yet received feedback on? Knowing that I was not that good of a programmer, worry led to fret and fret led to genuine stress. In discussions with the package maintainer, we agreed to back the patch out. Although a little disappointed, I came away realizing the power of the community feedback loop, but also the ad hoc nature of how open source software was tested: basically, by other developers on whatever they happened to be running.
In some ways this is good and helps drive the Darwinian nature of the bigger OSS projects. In some ways this is scary because it is really an inconsistent testing model. Andrew Morton recently discussed this issue about Linux kernel testing. So how do you tell when the community feedback loop is significant enough (either community size, developer talent, frequency, methodology, tools, etc.) to represent enough critical mass that your OSS project will be used and vetted broadly enough to represent some degree of satisfactory testing coverage? I have some ideas, but I want to open this one up for discussion: what tactics, strategies or even instincts do you use to assess the quality of an OSS project?*
* Also, I’m quite familiar with the various OSS maturity model projects out there. However, I’m skeptical, as most of the ones I’ve seen are driven by consultants who use them as tools to charge you for a service that helps you assess a given piece of OSS. This is fine, but it isn’t the question I’m asking (consultants have been doing this for years). There is the Carnegie Mellon West-led OpenBRR project, which looks interesting, but I know many of you have ideas on ways you have done this in the past – so in short, I’m interested in your experiences.