by admin on May 17, 2006 02:05pm
VC Summit 2006
Last week I attended the Microsoft VC Summit at our Silicon Valley campus. Before Microsoft and IBM, I helped to build four start-ups, three in the Bay Area, so spending a day with a couple hundred VC folks, talking about industry trends and business models and generally networking with some great people, was a lot of fun. I did a Q&A onstage with Scott Sandell from NEA on Microsoft, Open Source, our strategy and our relationship with the venture capital community. One of the questions we spent a good deal of time discussing was the impact of open source software on the venture community. It was an interesting discussion about defining and measuring a successful open source software company. In my opinion, many of these companies are either evolving toward or starting out with business models that combine open source ‘components’ with commercial components (Greenplum is a good example of this), largely because selling support and services for non-differentiated commodity software is not proving to be a sustainable revenue-generating model for most commercial OSS companies.
Steven Weber’s ‘The Success of Open Source’ discusses this in detail, and Stephen Walli and Matt Asay have been blog-debating this recently as well. I’m interested in business models, and I’m interested in analyzing the history of business models. One aspect that is often left out of this discussion is that some of these OSS companies have been around for a while, so there is a reasonable history to look back at and measure. Red Hat and MySQL were both founded around 1995 (if memory serves me), and many others can be traced back six, seven-plus years as well. So the question discussed at the VC Summit, as well as just this week in the blogosphere, is what qualifies as a commercially successful OSS company, and (importantly for investors) how do they compare to the other commercial software companies that VCs may be considering as potential portfolio companies? These venture-specific conversations were, of course, very much focused on benchmarking revenue and profitability. I talk a lot about the evolution of commercial and open source models, and I think this type of analysis will influence that evolution as vendors, customers, and the investment community start to take a realistic look at the pros and cons of the model. For more information on the VC Summit, Don Dodge has a good summary here.
by MichaelF on January 29, 2007 04:15pm
In December, Jamie posted a call for questions in the spirit of Festivus, one of our favorite secular, non-mainstream holidays (aye, we be talkin' like pirates on September 19th too, matey). Here is the result, or at least the first part of it.
We didn't get to all of the questions in this first pass, but we will be posting the continuation of this conversation early next week. Let us know what you think; if you enjoyed this, we'll be happy to do it more regularly.
by jcannon on December 13, 2006 06:29pm
It’s been an interesting nine months on Port 25. For those keeping track, the endeavors of our lab have taken us to Portland, New York, California, Thailand, Boston and more. We’ve had the chance to speak to some leading minds in the free and commercial open source world, including Eric Allman, Andi Gutmans, Tim O’Reilly, Matt Asay, Miguel de Icaza, among others. And there’s more to come. So we thought, at this time of year, it was time for a pause – a moment of examination – to try something different.
So here’s the idea. While we’ve had the fortunate opportunity to talk to many provocative folks across the globe who have been very generous with their time and knowledge, we’ve yet to turn the camera on ourselves and let you ask the questions. So let’s do exactly that. We’ll take user-submitted questions (unedited), compile them, and then go around the table with the staff of the Open Source Software Lab to get the answers.

Don’t hold back; feel free to air grievances (by grievances, I mean tough questions) or challenging technical issues you’re working on. We’ll try our best to address the most challenging, and most common, submissions. And given the often fiery tone on Port 25, there’s only one guiding principle to be smart about: questions of a derogatory, legal or unprofessional tone will likely be ignored. Otherwise, the ball is in your court to pose whatever Linux, Windows or OSS-related question is buzzing in your brain.

Use the comments below to post your question (in the interest of total transparency), or if you prefer, you can submit a question via e-mail. We’ll take the top 7-10 questions, get Sam, Kishi, Bryan, Hank and Anandeep all together right after the New Year, and tape a roundtable discussion of the Q&A session. We’ll post the resulting conversation on Port 25, in its entirety, afterwards. If it’s a productive discussion, we can schedule more – or even think about a live Town Hall chat with more folks from across Microsoft. The tone of the conversation is really up to you.
Looking forward to hearing your questions ~ have a merry Festivus :)
P.S. It may help to keep in mind the backgrounds of our lab staff – i.e., it’s unlikely we can answer questions related to nuclear physics (that I’m aware of – Sam might have a few tricks up his sleeve).
by jcannon on October 10, 2008 06:11pm
When I began my journey with technology, it was with a passion for the web. Living off a friend’s T1 line, I was hacking together HTML when Mosaic was the only show in town. I’m returning to that love of the web next week, when I’ll be moving to a new position in Product Management for Windows Live social networking.
To the community: I can’t tell you how much I’ve enjoyed learning, listening and working with you. I’ll take your wisdom with me and promise to carry the “open” flag wherever I go within Microsoft. You’re in good hands: effective immediately, Peter Galli will be taking over as Open Source Community Manager on Port 25. Things have been quiet on Port 25, and Peter has great plans to shake things up …but more on that later.
See you around, -Jamie
by Peter Galli on May 20, 2009 03:05pm
Tony Hey, corporate vice president of Microsoft Research's External Research group, today used the Open Repositories Conference to announce the public availability of Zentity and the second version of the Article Authoring Add-in for Word 2007, both of which will be released as open source.
Zentity, previously called the Research-Output Repository Platform and code-named Famulus, is a platform that allows institutions to store all of their digital scholarship: papers, lectures, presentations, videos – anything that might be collected by an institution as part of the digital output of its researchers and scholars.
Over the past nine months, two betas have been released, which refreshed the user interface, added new controls, and complemented the services provided in the package.
The second version of the Article Authoring Add-in for Word 2007 includes new functionality, including the ability to upload directly into a repository - Microsoft's or others' - via the SWORD [Simple Web-service Offering Repository Deposit] protocol.
Support for authoring Object Reuse and Exchange resource maps within the Word environment has also been added, as has the ability to perform literature searches and import the bibliographic information into Word with one click, which makes it very simple to quickly add citations to a paper.
A key element of the Microsoft External Research vision is to support the scholarly communications lifecycle with software and services so that data and information flow in a coordinated and seamless fashion.
With regard to the plan to open source these tools, Lee Dirks, the director of the Education and Scholarly Communication team, said that "first and foremost, we're releasing the binaries, but soon thereafter, we'll release both of these as open source. Once they are available, our big push over the next 12 to 18 months will be to build a worldwide community around these assets."
You can find a lot more information on these announcements here.
These moves also follow the March release by Microsoft and the Creative Commons of an add-in for Microsoft Word 2007 that enables authors to easily insert scientific hyperlinks or ontologies as semantic annotations to their documents and research papers.
Microsoft is also making the source code available for the Creative Commons Add-in for Word 2007 free of charge to open source communities on CodePlex through the OSI-approved Microsoft Public License, which lets developers tailor it for specific industries using domain-specific language.
by admin on June 13, 2006 03:29pm
FreedomHEC trip report On Friday, the 26th of May, I attended the FreedomHEC Unconference (yeah, I am late posting this). This was a two-day conference held on the 26th and 27th; I only attended the first day. The FreedomHEC Unconference was billed as:
The hardware unconference where you'll learn how easy it is to make your hardware compatible with free, open source operating systems such as Linux, and available to new markets such as servers, next-generation entertainment devices, and more. Get answers on everything from kernel data structures to the fine points of licensing. Discover how participating in the Linux process is fast and simple, how the development process works, and where to get started.
It did not completely achieve this goal, but it was very helpful to people who had never done device driver development for the Linux kernel.
It was set up by Don Marti and attended by approximately 35 people, of whom 8 or so were current kernel maintainers. I think the enthusiasts outnumbered those who would be writing company device drivers by 2 to 1.
The conference started with a suggestion session on what people wanted to see so that a calendar could be created. As a result, the first day's calendar became this:
1. SYSFS OVERVIEW
2. LINUX SOCIAL ENGINEERING
3. SCSI Q AND A
5. GETTING DRIVERS INTO THE KERNEL Q&A
Below are more details on the conference. My apologies if this is at times confusing, but I am working off my notes here...
1. SYSFS OVERVIEW
This was an overview given by Greg Kroah-Hartman, the current Linux PCI tree maintainer (among other things, he also works on the sysfs, kobject, debugfs and kref code). He works for SuSE Labs at Novell.
He presented a high-level walkthrough of the sysfs subsystem (the /sys area). This is only available in the 2.6 kernels and is still evolving.
Sysfs (/sys) is a RAM file system, so anything created in there by humans after boot will be lost after any subsequent boots.
Sysfs shows all of the devices (virtual and real) and how they are interconnected.
In the past /proc has been used, but /proc should be reserved for processes, not device drivers; it is the older method of creating device driver configuration files.
One of the reasons /sys exists is to standardize the configuration of various device drivers. /proc has been used (and misused) in the past for configuration representations, files and attributes, and the data under that file system differ from programmer to programmer and device to device, making them very hard to interpret unless you know the format.
/sys changes all that. It provides a standard method of defining and using device driver attributes. It is based on the principle of one value per file, and this value can only be a simple value – no histograms or large binary configurations (debugfs is specifically for that purpose!). Still, people seem to break these rules at times.
Do a tree under /sys to see what the structure is, and cat the files for the values.
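For example, here is a minimal Python sketch (run from user space, with attribute names that assume a typical SCSI disk is present under a 2.6 or later kernel) showing the one-value-per-file idea in practice:

```python
import os

SYSFS_DEVICE = "/sys/block/sda"  # assumes the first SCSI/SATA disk is present

def read_sysfs_attr(path):
    # Each sysfs file holds exactly one simple value; read it and strip the newline.
    with open(path) as f:
        return f.read().strip()

# Print a few well-known attributes of the disk, one value per file.
for attr in ("size", "removable"):
    path = os.path.join(SYSFS_DEVICE, attr)
    if os.path.exists(path):
        print(f"{attr} = {read_sysfs_attr(path)}")
```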
Power management, used to turn things on or off, is not fully working yet. USB is getting there for power management; the rest, not yet. Plug in a USB pen drive, for example, and watch /sys/block change in real time (/sys/block/sda). If you unplug a device, udev will take care of cleaning up; you do not have to un-mount or do anything else. (If you do that to a drive you are writing to, of course, you are on your own.) You can also mount things by label easily.
If a change is made to a device attribute in /sys, an event is triggered that programs like hal can receive in real time. General guideline: if you write a user-space program/driver, tie it into hal.
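As a rough illustration of that guideline, here is a small Python sketch that listens for those kernel events from user space. It uses the pyudev library rather than hal itself, purely as an assumption for the example, and the block-device filter is just an illustrative choice:

```python
import pyudev

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by(subsystem="block")  # only watch block devices, e.g. USB pen drives

# Block and print each event (add, remove, change) as the kernel emits it.
for device in iter(monitor.poll, None):
    print(device.action, device.device_node)
```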
Also, /dev is now a RAM file system in 2.6 (some distros might not have implemented this).
If you want to add proprietary drivers that are not sysfs-aware, they will live in /lib/udev/devices. This location is persistent.
2. LINUX SOCIAL ENGINEERING
This presentation was given by Randy Dunlap, a past USB maintainer and Kernel Janitor.
The presentation was intended to give people an overview of the do's and don'ts of submitting code to the kernel. A lot of the information was common sense, but a lot of people do not follow the rules (i.e., rules so obvious that they are not followed).
The presentation was very much in bullet form, and so are my notes on this, so they might not flow as well as they could.
Massive amounts of open communication via email etc.
LINUX DEVELOPMENT VALUES
THINGS TO AVOID
WHEN NEW INFRASTRUCTURE IS NEEDED
DRIVERS FOR NEW HARDWARE
NEW DRIVER DEV
MAILING LIST ETIQUETTE
All in all, the presentation was a crash course for people who have never done collaborative development, and it tried to prepare people for what is ahead and what kind of commitment is needed to be part of Linux kernel development.
3. SCSI Q AND A
Led by James Bottomley, the SCSI subsystem maintainer.
I do not have many notes on this part because not much was said that was worth keeping notes on; a few things that I did write down are as follows (the second item is of great interest):
If you want to get an idea of how SCSI drivers work, check out the 53c700, which is an excellent driver to learn from. It is older but continuously maintained.
There is a fairly new concept called a target driver. A target driver makes the machine appear to the outside world as if it were a device (a virtual device, more or less), but underneath it connects to the actual physical device drivers, allowing you, for example, to turn a Linux box into a RAID device. You can then talk to the target driver as if it were one device. There are not many of them out there yet; there is one from IBM, which James did not think was very good, and a few more are on the way. There are two people (whose names escape me at the moment) actively developing them. In my opinion this is a pretty interesting concept that lets you do a whole bunch of cool things (e.g., network routers, cheap RAID devices, etc.).
You can (should?) write SCSI drivers in user space.
A lighter presentation was given by two students from Portland State. They are using open source hardware and software to build rockets. Some interesting reads can be found on that project here:
5. GETTING DRIVERS INTO THE KERNEL Q&A
This was pretty much done by Greg.
There was some discussion of the source code control software of choice of the kernel maintainers.
Testing kernel code is hard, but crashme is used by the maintainers for (stress) testing kernel code.
Greg was very adamant that they will take any device driver. No matter what hardware it uses, as long as you or somebody else is willing to maintain it, they will take it. Old or new, it does not matter. Even if there is only one user for it, they will take it.
That’s all folks.
by jcannon on July 23, 2006 08:15pm
We rolled out additional updates to the site this past Friday in response to feedback, and created a new section of the site: Forums.
You’ll also notice:
We are behind on providing transcripts for all of the interviews, but this work is still under way. It’s important to us both for accessibility and to make it easier for non-English speakers (enabling machine translation). We got a suggestion to try using Amazon’s Mechanical Turk for transcription but we haven’t tried that yet ;)
Finally, I hope that the podcasts are useful. I enjoyed learning about Martin Woodward’s work on an open source, cross-platform (Eclipse-based) integration to Team Foundation Server.
Hope you have a great week, Sam
by jcannon on July 06, 2006 05:42pm
Linux Format reported on Port 25 recently with the tagline “Reports of snowballs seen in hell as Microsoft offers to work with Linux developers,” which I thought was funny. It’s apparently getting even colder down there as we’ve now announced an open source project that adds support for ODF to Microsoft Word 2007 ("Microsoft Expands Document Interoperability").
A few months ago I started working with Jean Paoli, whose leadership on Interoperability at Microsoft is steadily moving product teams toward the goal of consistently delivering high-quality interop. Brian Jones notes this in his blog but doesn’t call out Jean by name. You can be sure that you’ll see more of Jean’s handiwork in the coming months and years.
During the time I’ve worked with him I’ve been greatly encouraged by his commitment to openness in documentation and in implementation. The Open XML Translator project is a great example of this – it’s an open source project hosted on Sourceforge.
I couldn’t help but hop over to Slashdot and check out the reactions to the news – and as usual there was a mixture of the rational and irrational, hope and fear, insight and suspicion of conspiracy. It’s worth making one point over and over.
The Open XML Translator is an Open Source project. The Open XML Translator is an Open Source project. The Open XML Translator is an Open Source project.
By definition it can’t conceal its implementation, is open to experimentation, modification, and commercialization (it uses a BSD license), and is owned by the community.
If you think it needs improvement, then improve it. If you think it doesn’t matter, ignore it. But above all, really think about it and what it means that we’ve taken this step before reacting reflexively.
This is actually something new and different.
by MichaelF on February 15, 2007 06:29pm
Two updates this week but I also have some bigger news to report:
Starting February 10th, any registered user can start their own CodePlex project! Check out the details here: http://www.codeplex.com/Wiki/View.aspx?ProjectName=CodePlex
by jcannon on December 21, 2006 11:41am
Just a quick note & pointer to Paul Thurrott's interview with Sam on Open Source, the Lab and why these initiatives are so important to Microsoft and to our customers. The interview was conducted back in October, but the podcast just went live at Windows IT Pro.
by admin on May 24, 2006 01:57pm
I'm reading Open Sources 2.0 right now. It's a well-composed book of short essays by founders and luminaries of the Open Source movement - people like Chris DiBona, Ian Murdock, Matt Asay, and Danese Cooper, to name just a few.
So far I've read essays by Mitchell Baker (Mozilla), Chris DiBona (Google/Slashdot), Jeremy Allison (Samba), Ben Laurie (Apache), and Michael Olson (Sleepycat). They are all well-written and insightful. The most consistent conclusion of the essays I've read so far is that where development is concerned, Open Source development is not that different from commercial software development - similar (although usually more rapid) lifecycles, requirements and bug tracking. Key differences that the various authors cite are greater passion and willingness by open source developers to go beyond "working hours" to solve problems, and the general lack of interest in writing documentation as opposed to coding. This short summary unfortunately trivializes the excellent essays, and I encourage you to buy the book and read them yourselves.

I believe that Mitchell Baker's essay in particular offers the most powerful lessons for proprietary and commercial software development companies on how to adapt their practices to shipping open source software. In the Mozilla project, Mitchell was at the forefront of the wrenching practical and emotional shifts required from both AOL/Netscape management and the open source contributors to the project. Interestingly, Ben Laurie attacks the idea that "many eyes make all bugs shallow," one of the key claims about open source software quality. I myself have been a fan of this idea, and I was surprised to see him dispute it. To put his statements in context, however, Ben is specifically discussing security flaws, which he defines as a different class of problem from a standard "bug" or software defect. His point is that it takes deep expertise and hours of dedicated effort to find security flaws, and that most eyes cannot see them.
The most provocative essay I've read so far is by Michael Olson, who discusses the concept of a "dual licensing" model in detail. In short, dual licensing is a commercial Open Source software (COSS) approach that uses the GPL to convert full ownership of software IP into a self-sustaining open source community, while selling a proprietary license of the same source to proprietary vendors. The proprietary license grants the buyer more rights, including no reciprocity - not needing to release their own product under the GPL. This way, paying customers get the benefit of the open source product while retaining much stronger IP protection for themselves. Michael's summary of this balanced model is that the licensing & technology combination must be designed so that "Open source users experience only pleasure in their use of the software" while causing enough pain (Michael's word) to enough customers to make the business of selling proprietary licenses profitable.
This comes to mind most strongly to me because of some of the debates I've seen in the comments on Port 25. Some readers believe that any commercialization of open source software is downright wrong, and a violation of the principles of Open Source. Other readers seem quite willing to allow developers of open source software to make a living from their work. I think this may be an irresolvable dispute - a clash of ideals between Open Source as a movement and open source as a development, marketing, and commoditization model.
by Peter Galli on April 15, 2009 05:54pm
I noticed today that my colleague Jeff Jones in the security group is launching a metrics project that appears to be leveraging some of the good bits of open techniques.
I touched base with him briefly and he gave me a little more information about Project Quant, which is being undertaken along with Securosis, an independent security research firm.
Project Quant will be working on the metrics of patch management and is as much an experiment in a new research process as it is one in security metrics, said Securosis founder Rich Mogull in a blog post.
"For this project Jeff wanted to be involved, but also asked for an open, unbiased model that will be useful to community-at-large (in other words, he didn't ask for a sales tool). Rather than us developing something back at the metrics lab, Jeff asked us to lead an open community project with as much involvement from the different corners of the industry as possible," Mogull said.
While he also acknowledged that it is risky for Securosis to allow direct involvement of the sponsor, the company is hoping that the process works the way it thinks it will - an outcome that also happens to match Microsoft's goals for the project.
So, this is what's expected to happen: a project landing site has been set up at Securosis that will contain all material and research as it is developed; every piece of research will be posted for public comment and no comments will be filtered unless they are spam, totally off topic, or personal insults.
All significant contributors will also be acknowledged in the final report, although there will be no financial compensation for contributors and the project itself will retain ownership rights. All material will also be released under a Creative Commons license, with spreadsheets released in both Excel and open formats.
"In short, we are developing all research out in the open, soliciting community involvement at every stage, making all the materials public, acknowledging contributors, and eventually releasing the final results for free and public use. The end goal of the project is to deliver a metrics model for patch management response to help organizations assess their costs, optimize their process, and achieve their business goals. Let us know what you think, even if you think we're just full of it," Mogull said.
For his part, Jones told me that while he has been zealous in past reports about using repeatable methodologies, pointing to his source of public data, and outlining his assumptions step-by-step, he would like to take transparency one step further by developing models and methodologies first, in an open and transparent manner, so that everyone can agree on the pros and cons before the methodologies are applied.
"I think being completely open and transparent will help credibility since, similar to open source, everyone can scrutinize every step of the analysis ... creating open models and potentially getting community involvement just seems to be the right process," he says.
I plan to interview him at greater length in the next few weeks, so look for a follow-up blog then.
by admin on June 06, 2006 05:49pm
New friends from Linux World Brazil I’ve just returned from Linux World Brazil, where I presented on ‘OSS and Microsoft.’ One night in Sao Paulo I had the opportunity to chat with two of the leading OSS technologists in Brazil – Cesar Brod and Helio Chissini de Castro. Cesar has an interesting background, working at Tandem from 1992-1998 and at various companies throughout Brazil, including his own consulting company. Cesar is involved with Linux International and was also one of three finalists for the Free Software Community Award in 2004. Helio is well known in Brazil as one of the key developers of Conectiva and a prolific KDE and Linux developer and instructor. He currently works for Mandriva (formed when Mandrakesoft acquired Conectiva). If you’ve spent time in the Conectiva or KDE developer world, you certainly have heard of Helio.

We had a great conversation about the Linux/OSS environment in Brazil, particularly the history of this community. Cesar echoed a statement I’ve heard from quite a few other OSS developers about those who ‘do’ actual technical work in the OSS community and those who ‘talk’ about it. His point was that there has always been a developer community in Brazil – before OSS hit the scene – and Cesar has tried to keep this community focused on the real work, not simply the rhetoric and politics. Cesar is a well-regarded OSS participant and has some great stories about his early days at Tandem, how he discovered Linux (an alternative to flying back and forth from city to city to use a Unix machine) and how the OSS community has evolved. Both Cesar’s and Helio’s pragmatism, honesty, and open-mindedness show their experience and wisdom – I hope to continue our discussions in the future. Cesar was also kind enough to introduce me to his friend John “Maddog” Hall. Maddog is well known in OSS circles, and although we’ve heard about each other, this was the first time we had the opportunity to meet face to face.
(From left: Maddog, Me) Maddog’s keynote was right before mine, so I had the chance to see him present on the ‘Total Value’ of software. It was interesting, and although I disagreed with some of his points (and the ‘blue screen of death’ jabs – come on, used any Microsoft software since 2003?), Maddog is a good presenter and hit some important points about standards, competition and choice that I agree with strongly. Shortly after our presentations I had the chance to talk with Maddog. We had a good conversation about change at large corporations (Maddog is a former Digital guy) and software trends. Despite the name, Maddog is a balanced industry veteran I could easily talk to for hours; his perspectives and insights are valuable, and although I think we may disagree on some points, there are many others where we find harmony.

I have visited Brazil many times, and I was honored to speak at Linux World Brazil, but this may have been one of the most useful of my trips, as I was able to meet customers, partners, Microsoft teams, government officials, and developers and thinkers in the OSS world. New friendships are always important to me, but it’s the cascade of new thoughts from all of these various discussions that keeps me awake on my flight back to Seattle. -Bill
by admin on April 27, 2006 05:27pm
The eighteenth-century philosopher and author Voltaire is often credited with the line “I disapprove of what you say, but I will defend to the death your right to say it.” This resonates very much with my own thoughts after going through the comments and feedback on my previous blog. I want to thank everyone for providing valuable feedback, regardless of whether I agree with it or not. The principle here is not to choose a side but to put a process in place that allows for an open and honest dialogue, and I am exhilarated at the results of this endeavor.
Moving on to “Consistency and Standards” – the short theme for today’s blog. One of the questions put to me recently was to share something that I might have benefited from during my past life in IT Operations. As I mentioned previously, I have been involved with IT Operations in some shape or form since 1989. From my days as a student working at Academic Computing Services at Syracuse University managing Macintosh and Sun SPARC clusters, all the way to the past three years managing 7x24 support of Class-A production services like AD, DNS, WINS, etc. for MSN Operations, I can vouch for the fact that consistency in implementation standards has saved the day countless times. The point here is, no matter what platform, toolset, operating system or application you may choose, developing standards for consistent implementation will always pay off in a lower “TCO,” or Total Cost of Ownership, in terms of supportability.
I remember a few years ago we were battling the spread of one of the malicious worms across the Internet. We were in the middle of taking inventory of the configuration of our production servers, spread all across the world, that provided these mission-critical Class-A services. We all realized that adhering to a common toolset and standard SKUs for hardware as well as for OS versions helped us reduce the deployment cycle of a patch from what had seemed like days to a matter of hours. You may ask, “Hey, how does that matter?” Well, imagine writing a different script for each type of configuration and multiply that by the hundreds of servers spread across the world in eight different datacenters. That’s quite a chore, especially when time is not on your side, you’re facing a crisis and you have only a limited number of resources you can muster for support. You’ll also need to track the success and failure of applying the patch across datacenters and monitor where additional attention is needed.
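To make the tracking point concrete, here is a small, hypothetical Python sketch (the datacenter names, server names and results are made up) that aggregates patch results per datacenter so you can see at a glance where additional attention is needed:

```python
from collections import defaultdict

# Hypothetical patch results gathered from each datacenter: (datacenter, server, succeeded)
results = [
    ("dc-east", "web01", True),
    ("dc-east", "web02", False),
    ("dc-west", "sql01", True),
    ("dc-west", "dns02", False),
]

summary = defaultdict(lambda: {"patched": 0, "failed": []})
for datacenter, server, succeeded in results:
    if succeeded:
        summary[datacenter]["patched"] += 1
    else:
        summary[datacenter]["failed"].append(server)

# Print a per-datacenter rollup so the crisis team knows where to focus next.
for datacenter, stats in sorted(summary.items()):
    print(f"{datacenter}: {stats['patched']} patched, needs attention: {stats['failed'] or 'none'}")
```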
So, what does that mean? That complex environments require standards to work “for and with” IT Administrators. Admittedly, “one-off” or “out-of-standards” configs are very easy to do if we’re trying to please a group or mend fences with a specific customer. But in reality, we’re doing them (our customers) a disservice by putting their environment in harm’s way and increasing their risks. Why? Because supportability of their environment is ultimately our responsibility. So…the more colors we put on the fence, the more painters we will need and the longer it will take……
by admin on May 22, 2006 12:47pm
We had a great week here because we all got to spend a lot of time out of the office meeting people from all over the world who came to attend an event in Seattle. A lot of the people I ran into had specific, pinpointed questions about various technologically perplexing scenarios they had encountered, or were facing some hard questions from their customers on technology management. Something in these conversations always sticks in my head, and today I learned some very surprising details about the "push" for automation, both process- and technology-based. What I heard in these discussions reinforced the core fundamentals of technology management, such as “never replace an expert with what you think is a good application.” Let me explain what I mean:
Let’s start by discussing the role automation, be it scripting- or process-based, plays in the life of an IT Operations professional. I uncovered two scenarios: the first where automation was key in driving efficiencies, increasing reliability and predictability, and lowering TCO; the second describing a situation where unnecessary automation was implemented in place of genuine expertise, with an undesirable outcome of course. Scenario 1 – Successful implementation of automation: The need for automation is almost always driven by a business need mapping back to a simple, repeatable process. Let’s say one of the tasks you’re responsible for in a large environment is maintaining and updating DNS records. Take something as simple as changing a DNS record and assigning a new address to the entry. This is a perfectly simple AND repeatable process that screams to be automated. Putting a simple script or a comprehensive tool around such a scenario would be prudent and wise, as it takes the repetition out of the job and makes the overall process less error-prone, more automated and dependable.
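As a purely illustrative sketch of Scenario 1 (the zone, server address and host name below are hypothetical, and the dnspython library is an assumption, not the tool the author used), a dynamic DNS update can be wrapped in a few lines of Python:

```python
import dns.query
import dns.rcode
import dns.update

ZONE = "example.internal"   # hypothetical zone
DNS_SERVER = "10.0.0.1"     # hypothetical authoritative server

def replace_a_record(host, address, ttl=300):
    # Build a dynamic update that replaces the A record for `host` in ZONE.
    update = dns.update.Update(ZONE)
    update.replace(host, ttl, "A", address)
    response = dns.query.tcp(update, DNS_SERVER, timeout=10)
    return response.rcode() == dns.rcode.NOERROR

if __name__ == "__main__":
    print("update succeeded" if replace_a_record("web01", "10.0.0.42") else "update failed")
```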
Scenario 2 – Unnecessary use of automation: The same process can be automated only to a certain extent, after which human expertise becomes critical. Continuing the process in Scenario 1, once the DNS records are changed, it’s very easy to set up an additional script or automation around “validation.” By validation I mean double-checking the change to make sure that the prior and successive entries are accurate, leaving no room for error. Why is validation necessary? Well, because once the DNS records are changed and have propagated through the environment, an incorrect entry can wreak havoc and make the busiest server unresponsive to any name resolution request. However, having a resident analyst who can validate all entries of the request, check the addresses manually before entering them into the script/tool, and do post-change validation helps eliminate the possibility of an “outage,” the one word every Operations professional dreads.
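For illustration, this is the kind of quick validation script the scenario describes as easy to set up (again with hypothetical names, and assuming dnspython 2.x for resolver.resolve); the point of Scenario 2 is that such a script complements, rather than replaces, the analyst's manual check:

```python
import dns.resolver

DNS_SERVER = "10.0.0.1"  # hypothetical server to validate against

def validate_a_record(fqdn, expected_address):
    # Query the server directly and compare its answer with the intended address.
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [DNS_SERVER]
    answer = resolver.resolve(fqdn, "A")
    return expected_address in {rr.address for rr in answer}

if __name__ == "__main__":
    if validate_a_record("web01.example.internal", "10.0.0.42"):
        print("record matches the requested change")
    else:
        print("mismatch - needs a human eye before the change is closed")
```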
In conclusion, I’d like to say that our goal as contributors to Port25 is to always try to put forward instances and knowledge that help the IT Community run their environment/s. Therefore, if there is something specific, be it technology or operations methodology that you'd like us to dig deeper into, your ideas are ALWAYS most welcome.
Thank you all, have a great week ahead !!