by jcannon on December 21, 2006 11:41am
Just a quick note & pointer to Paul Thurrott's interview with Sam on Open Source, the Lab, and why these initiatives are so important to Microsoft and to our customers. The interview was conducted back in October, but the podcast just went live at Windows IT Pro.
by Frank Chism on October 20, 2006 03:06pm
"`When I use a word,' Humpty Dumpty said, in rather a scornful tone, `it means just what I choose it to mean -- neither more nor less.' "
Where's the glory? I work in the cluster business. I can tell you that all too often I have felt like Alice trying to hold a conversation with Humpty Dumpty in Looking Glass Land. This usually occurs when I'm talking to someone new to cluster computing or someone who comes from a different thread of the industry than I do. My roots are in a thread that used number crunching to mean serious floating point arithmetic done by Fortran programs to simulate physical processes. Of course, some of the support routines and tools and even the operating system might be written in C, but Fortran ruled. Imagine my surprise when I found there was a 'Number Crunchers Users Group' in Seattle and they got together to discuss using spreadsheets. "Now where's the glory in that?" I thought to myself.
Time marches on, but technology runs as fast as it can just to stay in one place. Fortunately for me, the object oriented police have provided me with just the right jargon to describe my predicament. Just consider that in any modern object oriented language it is possible that + can mean any number of things. Humpty would be proud. In OOP + means just exactly what the developer chooses it to mean. This is called overloading an operator. That may be OK for a compiler, but what about me? When I use cluster I am thinking of something that descended from the original Beowulf. No, not the King of the Geats. I mean the seminal work of those oft sung NASA nerds who put together the first Beowulf compute clusters. When I say nerds, I am here to praise cluster creators, not heap dirt on them or their work. After all, they ain’t dead yet.
For example, I work for a company that has several cluster offerings. There’s failover clusters, and load balancing scale out clusters, and my baby compute clusters. Now that’s overloading. You can usually tell what kind of cluster we mean by the type of work we talk about feeding it. If you had one type of cluster in mind and I had another and we kept talking long enough we’d either figure out the root cause of the confusion or dismiss our conversational partner as an idiot.
But wait. It gets worse. Within my own little compute centric world, two new terms have come into common usage. They are farm and grid. So how do I tell a farm from a cluster if both are eating compute intensive programs? And worse yet, how is a cluster or a farm related (or not) to a grid? I was recently told by a co-worker to not tell our customer that he had a cluster, because as far as he was concerned it was a grid. This is proof that technical correctness is not nearly as important as political correctness. As in politics, so in life.
I can't claim to have invented farms, but I can certainly claim to be one of the first of the render farmers. I was working at an early Computer Generated Images (CGI) site that was falling behind schedule for a major (OK, it was a big deal to us) Hollywood movie. If we were to finish in time for the planned release, we needed to get our CGI effects generated at just about twice the rate we were running at on our current machine. Fortunately the little ol' mainframe we were using, a Cray-1, had just been superseded by the Cray X-MP, which had two CPUs instead of one, and each CPU was about 50% faster than the Cray-1 CPU. In an example of embarrassingly parallel render farming, we ran odd numbered frames on one thread, even numbered frames on another and ran a third thread to collate the frames and send them to the camera.
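The odd/even split above can be sketched in miniature. This is purely illustrative: the function names are mine, and the real job ran on a two-CPU Cray X-MP, not in Python — but the shape of the trick (two workers on disjoint frame sets, a collation step at the end) is the same.

```python
# Toy sketch of embarrassingly parallel render farming: odd frames on one
# worker, even frames on another, then collate back into frame order.
from concurrent.futures import ThreadPoolExecutor

def render_frame(n):
    # Stand-in for the real (expensive) renderer.
    return (n, f"frame-{n:04d}")

def render_batch(frame_numbers):
    return [render_frame(n) for n in frame_numbers]

def render_movie(total_frames):
    odd = list(range(1, total_frames + 1, 2))
    even = list(range(2, total_frames + 1, 2))
    with ThreadPoolExecutor(max_workers=2) as pool:
        odd_done = pool.submit(render_batch, odd)
        even_done = pool.submit(render_batch, even)
        rendered = odd_done.result() + even_done.result()
    # The "third thread": collate the frames into order before sending
    # them to the camera.
    return [image for _, image in sorted(rendered)]
```

Because each frame is independent of every other, no coordination is needed beyond the final sort — which is exactly what makes it "embarrassingly" parallel.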
I can't be blamed for grid at all. Well yes, some of the computers my company sold were 'on the grid', but I never thought of the grid as anything other than a route for users to do cool things with our machines. In fact I wasn't sure that grid was anything other than a buzzword used to get NSF funding. Now, thanks to the efforts of the hardworking and unpaid volunteers at Wikipedia, I have at least one fixed mark to guide my wandering bark.
If a cluster on the grid failed over and no one was there to farm it, would it make any sense? So, can we all agree on one set of definitions for clusters (several flavors to be sure), farms and grids? If not, I'm sure I'll hear from the more assertive of the Port 25 readers. Perhaps we can reach a group consensus, and then I can start quoting the group mind of an entire community in defense of my own use of these terms without sounding too much like Humpty Dumpty making up meanings as I see fit.
Cluster: Making more than one computer behave as a single resource.
Failover or High Availability Cluster: A cluster specifically designed to perform functions in a manner that makes the service it provides continuous, even in the event of individual computer failures.
Load balancing or Scale out Cluster: Generally a high availability cluster that, in addition to offering resiliency against individual computer failures, also offers additional capacity to deliver more of the intended service.
Compute Cluster: A cluster that is built as a single unit, treated as a single system, and tuned to perform compute intensive tasks either as a capacity engine (running lots of single-node jobs or many low-scale parallel jobs) or as a capability engine (running much bigger parallel jobs than a single node can accommodate).
Compute Farm: A cluster that uses a collection of computers, generally in a centralized location, to run many similar jobs in parallel for improved time to completion of a particular process. This is very similar to a Compute Cluster in capacity mode but the farm is not necessarily built to look like a single system.
Compute Grid: A heterogeneous farm that is spread out across a wider network or even the Internet but, more importantly, is controlled by and conforms to the standards, concepts, and tools originating in the Globus Toolkit. It can be used in both capacity and capability mode but is generally a distributed collection of resources, not a single system.
I tried to turn the handle but—
That’s all for now. I enjoyed writing this and hope to hear from some of you about what you think of my proposed definitions and how they can be improved. Other items on my blog-fodder list are ‘The Parallel Imperative’ and ‘What the Heck is Parallel I/O Anyway?’
So, never stop studying and I’ll blog at you later. - Frank
by admin on June 20, 2006 05:51pm
Not long after I blogged about “disambiguating open” as a research issue, a debate erupted on Slashdot about “How Open Does Open Source Need to Be?” Three different criteria for deciding whether something could legitimately call itself “open source” seemed to me to dominate the discussion: the level and terms of source code availability; whether or not there was a community organized around the code; and compliance (or not) with the Open Source Initiative (OSI) definition.
What struck me about this debate was, first, the fact that it was happening because "open source," in this case, was regarded as a favorable characterization (and thus in the interest of a software producer to apply to their bits), and that favorability was rooted outside of any producer's application of the term (in contrast, it's unlikely such a debate would occur if one of the parties had chosen to label their product "cromulent," for example). Put another way, the operative assumption is that calling your product "open source" piggybacks on a store of perceived value among potential customers built up by others, which may in a technical or moral sense be "undeserved" and (at worst) might actually result in dilution of that store of value.
What I thought was missing, however, was some empirical perspective on the ways in which states of the three criteria above might in fact add to or erode that perceived store of value when the so-called (or not-so-called) “open source” product was actually consumed by customers. This reminded me of Perspectives on Open Source Software (2001) from Carnegie-Mellon’s Software Engineering Institute, where the authors do a great job of describing why the answer to “how open” software needs to be will almost always come down to “it depends...”—if we’re thinking of “need” relative to optimizing for the resulting customer experience (as opposed to the relative ease of calling something “open” for marketing purposes or conforming to a check-the-box definition.)
The report describes “open source software” as
"at the most basic level simply [meaning] software for which the source code is open and available."
Which they define as:
“open--The source code for the software can be read (seen) and written (modified). Further, this term is meant to promote the creation and distribution of derivative works of the software…available--the source code can be acquired either free of charge or for a nominal fee”
And they contrast this with the Open Source Definition (OSD) from the Open Source Initiative (OSI) noting:
“Under OSI (strictly speaking) a software product is in fact open source if and only if it conforms to the OSD…Upon reviewing the complete text of the OSD, it is interesting to point out that the definition does not pertain specifically to the source code itself, but rather to the license under which the source code is distributed. Therefore, in strict conformance to the OSD written by the OSI, a software product that conforms to only eight of the nine criteria is not OSS.”
They raise a few points relevant to distinguishing meaningful customer requirements for the software:
The authors of the report conclude that:
OSS [is] a viable source of components from which to build systems. However, we are not saying that OSS should be chosen over other systems simply because the software is open source. Rather, like COTS [commercial off-the-shelf software] and CSS [closed-source software], OSS should be selected and evaluated on its merits…Caveat emptor (let the buyer beware): the [software] product should be chosen based on the mission needs of the system and the needs of the users who will be the ultimate recipients.
To me, reading the discussion thread and then mulling over the SEI's examples brings me right back to what gets me out of bed every (work) day: there is a rich field of inquiry about what makes software successful for its developers and users. Neither any one commercial vendor nor organization has it all figured out and trademarked yet—if caveat emptor is good advice to keep in mind, salus populi suprema lex esto ("the welfare of the people shall be the supreme law") is a good motto to live by.
(Incidentally, SEI authors Chuck Weinstock and Scott Hissam revisit some of the findings and analysis in the SEI report in a chapter in a great new book, Perspectives on Free and Open Source Software (MIT Press) – I'll blog more about the book soon.)
by jcannon on July 23, 2006 08:15pm
We’ve rolled out additional updates to the site this past Friday in response to feedback, and created a new section of the site: Forums.
You’ll also notice:
We are behind on providing transcripts for all of the interviews, but this work is still under way. It’s important to us both for accessibility and to make it easier for non-English speakers (enabling machine translation). We got a suggestion to try using Amazon’s Mechanical Turk for transcription but we haven’t tried that yet ;)
Finally, I hope that the podcasts are useful. I enjoyed learning about Martin Woodward’s work on an open source, cross-platform (Eclipse-based) integration to Team Foundation Server.
Hope you have a great week, Sam
by kishi on November 06, 2007 03:21pm
I have been working as a Senior Program Manager with the Open Source Software Lab since the fall of 2005. After spending two of the most eye-opening and fantastic years here, sadly, the time has come for me to move on. I am taking on a role in a different division inside of Microsoft, but having been attached to Port25 for such a long time, I didn't want to leave without writing my parting thoughts. You see, when I started my work with the Open Source Software Lab, I had no idea who Bill Hilf was or what his role at Microsoft was. So when I first came to speak to him about this opportunity, I was driven purely by the job description, the first line of which read "Everything is connected". After talking to Bill, when I came back and searched for his name and credentials on the web, needless to say, I felt like a total idiot. Here was someone who was literally the Linux and Open Source "guy" within Microsoft, and I had no clue about his background whatsoever... which taught me that I should have done my homework better. After going through the interview loops and meeting some sharp minds in the OSSL, I was very attracted to the opportunity and came on board.
Anyway, I have had the pleasure of working with some amazing people on this team: Sam Ramji, Hank Janssen, Michael Francisco, Steve Zarkos, Tom Hanrahan, John Kew, and Anandeep Pannu, to name a few. In the process of understanding and learning about Linux and Open Source technologies, I also learnt a whole lot about driving change through people, technology and especially practices (Sam – thank you!). In my two years with the OSSL, I got the opportunity to REALLY push the boundaries of conventional or deep-rooted thinking. I was able to work on my pet projects and areas of interest, such as Systems Manageability and IT Operations. I spent this past summer building the Interop Lab in Cambridge, MA – something I enjoyed wholeheartedly. I got face time with thought leaders like Miguel de Icaza and rubbed shoulders with creative thinkers like Tom Hanrahan. The experience that I am walking away with is quite profound at many levels. Let me explain why: this team is so unique in what it does that it's perhaps one of the few places with the ability to drive change both inward and outward. In my time here, I have not only seen the ground shift beneath my feet but have also seen tremendous progress towards community involvement and understanding as it relates to Linux and Open Source. The wisdom I am walking away with can best be captured by something Margaret Mead wrote: "Never underestimate the power of a few committed people to change the world". I say that with the utmost passion because the intellectual horsepower, pure passion and pace that I have witnessed in this group is hard to ignore or imitate.
Some other thoughts that I am taking with me concern how much effort goes into simply undoing misconceptions and misunderstandings. Working in this group and watching Bill, Sam, Hank and all these guys work, I realized how committed we are to building bridges and doing a great job of listening as well as being understood. So, after working with Open Source enthusiasts and Windows professionals side-by-side, I wholeheartedly endorse something F. Scott Fitzgerald wrote a while ago: "The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function".
In conclusion, I would urge the Open Source Community to really look at how far we have come in the past two years alone. Don’t take my word for it, see for yourself the work done on Port25 and http://www.microsoft.com/opensource
As always, your thoughts and comments are welcome… Alvidaa (that's Urdu for "farewell").
by admin on June 06, 2006 05:49pm
New friends from Linux World Brazil

I've just returned from Linux World Brazil, where I presented on 'OSS and Microsoft.' One night in Sao Paulo I had the opportunity to chat with two of the leading OSS technologists in Brazil – Cesar Brod and Helio Chessini de Castro. Cesar has an interesting background, working at Tandem from 1992-1998 and at various companies throughout Brazil, including his own consulting company. Cesar is involved with Linux International and was also one of three finalists for the Free Software Community Award in 2004. Helio is well known in Brazil as one of the key developers of Conectiva and a prolific KDE and Linux developer and instructor. He currently works for Mandriva (which acquired Conectiva to form the Mandrake+Conectiva distribution). If you've spent time in the Conectiva or KDE developer world, you certainly have heard of Helio.

We had a great conversation about the Linux/OSS environment in Brazil, particularly the history of this community. Cesar echoed a statement I've heard from quite a few other OSS developers about those who 'do' actual technical work in the OSS community and those who 'talk' about it. His point was that there has always been a developer community in Brazil – before OSS hit the scene – and Cesar has tried to keep this community focused on the real work, not simply the rhetoric and politics. Cesar is a well regarded OSS participant and has some great stories about his early days at Tandem, how he discovered Linux (an alternative to flying back and forth from city to city to use a Unix machine) and how the OSS community has evolved. Both Cesar's and Helio's pragmatism, honesty, and open-mindedness show their experience and wisdom – I hope to continue our discussions in the future.

Cesar was also kind enough to introduce me to his friend John "Maddog" Hall. Maddog is well known in OSS circles, and although we've heard about each other, this was the first time we had the opportunity to meet face to face.
(From left: Maddog, Me)

Maddog's keynote was right before mine, so I had the chance to see him present on the 'Total Value' of software. It was interesting, and although I disagreed with some of his points (and the 'blue screen of death' jabs – come on, used any Microsoft software since 2003?), Maddog is a good presenter and hit some important points about standards, competition and choice that I agree strongly with. Shortly after our presentations I had the chance to talk with Maddog. We had a good conversation about change at large corporations (Maddog is a former Digital guy) and software trends. Despite the name, Maddog is a balanced industry veteran whom I could easily talk to for hours. His perspectives and insight are valuable, and although I think we may disagree on some points, there are many others where we find harmony.

I have visited Brazil many times, and I was honored to speak at Linux World Brazil, but this may be one of the most useful of my trips, as I was able to meet customers, partners, Microsoft teams, government officials, and developers and thinkers in the OSS world. New friendships are always important to me, but it's the cascade of new thoughts from all of these various discussions that keeps me awake on my flight back to Seattle. -Bill
by jcannon on December 13, 2006 06:29pm
It’s been an interesting nine months on Port 25. For those keeping track, the endeavors of our lab have taken us to Portland, New York, California, Thailand, Boston and more. We’ve had the chance to speak to some leading minds in the free and commercial open source world, including Eric Allman, Andi Gutmans, Tim O’Reilly, Matt Asay, Miguel de Icaza, among others. And there’s more to come. So we thought, at this time of year, it was time for a pause – a moment of examination - to try something different.
So here's the idea. While we've had the fortunate opportunity to talk to many provocative folks across the globe who have been very generous with their time and knowledge, we've yet to turn the camera on ourselves and let you ask the questions. So let's do exactly that. We'll take user-submitted questions (unedited), compile them, and then go around the table with the staff of the Open Source Software Lab to get the answers.

Don't hold back: feel free to air grievances (by grievances, I mean tough questions) or challenging technical issues you're working on. We'll try our best to address the most challenging, and most common, submissions. And given the often fiery tone on Port 25, there's only one guiding principle to be smart about: questions of a derogatory, legal or unprofessional tone will likely be ignored. Otherwise, the ball is in your court to pose whatever Linux, Windows or OSS-related question is buzzing in your brain.

Use the comments below to post your question (in the interest of total transparency), or if you prefer, you can submit a question via e-mail. We'll take the top 7-10 questions, get Sam, Kishi, Bryan, Hank and Anandeep all together right after the New Year, and tape a roundtable discussion of the Q&A session. We'll post the resulting conversation on Port 25, in its entirety, afterwards. If it's a productive discussion, we can schedule more – or even think about a live Town Hall chat with more folks from across Microsoft. The tone of the conversation is really up to you.
Looking forward to hearing your questions ~ have a merry Festivus :)
PS. It may help to keep in mind the backgrounds of our lab staff – i.e., it's unlikely we can answer questions related to nuclear physics (that I'm aware of – Sam might have a few tricks up his sleeve).
by hjanssen on December 18, 2006 02:40pm
I have finally found a way to write more blogs! When I am in the office I have so much work that I rarely get enough time to sit down and concentrate on a blog. When I get home (normally later than my wife wants me to, she tells me) I do not always have the desire to write one. But I am flying for work this week, and I am finding all kinds of time!
For me, the line that epitomizes the fact that I must have turned into my parents is "When I was young". Yet I find myself starting this blog with exactly that.
First, let me describe the catalyst for this blog:
A few months ago I attended OSCON 2006. One of the sessions I went to was called 'PHP Security Hoedown', given by Ed Finkler (http://conferences.oreillynet.com/cs/os2006/view/e_sess/9527).
Basically, this session was about PHP security. It was a response to security problems people have been finding with PHP – specifically with the installation and running of PHP.
He stated that a large part of the security problems PHP seems to be suffering from can be summed up like this (I have taken some liberty to paraphrase some of the things that were said, but check the above link to his original presentation):
PHP has a fairly shallow learning curve. Because of that shallow learning curve, PHP programmers vary widely in skill sets: almost anybody can get started in PHP and get something running pretty quickly, yet only a small percentage of top-level people could be considered 'experts' in the language.
So, now we are getting to the part that I warned about. ‘When I was Young’.
Many moons ago, now more than I am willing to legally admit to, I started my career with Philips/AT&T, who at the time had a joint venture developing very complex digital telephone switches: the 5ESS line. This was a very sophisticated telephone system that was almost completely written in C.
When I started my programming career with AT&T (now over 20 years ago) you had to go through a lengthy process of learning the C language. Carrier-grade software was, and still is, of a very complex nature. As anyone who has ever written in C knows, it is a very powerful language that hands you a very large gun with which to shoot yourself in almost every body part you can reach if you are not careful. So we were trained very well before we were let loose writing switching code. One of the other things that was required: if you wanted to make the jump into C++ (mind you, this was when there was no C++ compiler yet, only Cfront, which was a pre-compiler/parser), you were not allowed to write in C++ unless you had been programming in C for at least 3 years consistently.
There really were not as many higher-level languages then as there are today.
For the last few years I have seen more and more computer languages born, and in some cases die. They all try to fix what their authors thought was missing in the languages that came before them. Another trend has been to make languages more accessible and easier to use for people from all walks of life who want to program. Imagine that! A language that does not require a 4-year degree to work in!
Some of these languages, for example PHP and Ruby (the trend is certainly not limited to these two, I might add!), allow people with limited computing backgrounds to write fairly decent programs in a small amount of time.
But this is where some of the security issues are showing up. The languages are becoming easier to use, but a lot of the operating systems they run on really have not become easier. So many of these programs are now used without the installer or programmer realizing what the effect and impact of running them is on the operating system. This seems to be a problem on both Linux and Windows platforms.
Although I applaud making programming languages easier for the more casual user, I do see that we are forgetting in many cases to make the environments these programs need to run in safer and easier as well.
I have seen, so many times, programs that write their files in 'interesting' and unsecured places, or that ship with multiple libraries that might or might not support the application (heck, I am not sure what makes the thing run, so I will just copy all kinds of libraries in an attempt to make the application work). File permissions set incorrectly, readable by the world. Incorrect owners, etc. And these are just some of the issues that seem to be present. The unfortunate thing is that a lot of these problems are easily fixed.
But I think that we need to do more as developers and system architects. Some of the suggestions that come to mind are:
It seems to me that languages need to be developed more with the end user in mind regarding deployment and the OSes they will run on. A language can have all the cool features you ever thought of, but if on deployment you create system issues, or worse a bad security hole, then it all will have been just a hobby.
I can equate it to getting your driver's license: getting your license is fairly easy (at least in the US it is), and you can get it without knowing anything at all about cars. Car manufacturers have realized this and have made their cars tell the driver what is wrong. Now, if you keep on driving your car with the 'check engine' light on, well, then you are on your own.
If we want languages to be adopted and thrive, we better find a way to build in a ‘check program’ light.
by MJM on December 21, 2007 03:49pm
When I introduce myself around here, I usually lead with the caveat: I am not technical. It’s true, I played around with BASIC as a kid, and, in high school, I tore apart a series of Apples in the generally vain attempt to understand how they worked. I even went to university to study electrical engineering and robotics. But I only made it two years in that because, when all was said and done, I simply wasn’t very good at the technical bits and bytes.
I grew up thinking I wanted to study artificial intelligence. Turns out, I was more interested in the “intelligence” than the “artificial.” Much to my parents’ chagrin, that realization led first to the study of philosophy and then to academia. Ultimately, I ended up in the law, where I spent the last 8 years.
About 10 months ago, I left my practice and joined Microsoft. Now, here I am on the Community Platform team at Microsoft, blogging on Port 25. If you are asking yourself why, I don’t blame you. I’ve asked myself that question more than once since I’ve been on board. :)
Most see open source as a technical phenomenon, and indeed it is one of the more important movements in software development of the last decade or so. However, it’s also a legal, sociological and, in many ways, a philosophical phenomenon. These latter aspects make “open source” a fascinating subject for someone with my background.
Bryan has blogged several times about the concept of “participation.” Participation – and the related ideas of access, inclusion and collaboration – are vital concerns in a world of rapidly increasing information and expanding access. When you also consider Bill’s recent blog about networks and “six-degrees of separation,” you can tell that participation and the community it engenders are constantly on our minds around here.
These concepts are fundamental aspects of open source and the focus of my job. As the open source research and policy lead, I examine how Microsoft can better understand and participate in the open source community and how, through its participation, Microsoft can create more opportunity for software developers and users around the world.
Thus, I’m pleased to announce a couple of our activities in 2008 that I hope will advance knowledge and understanding of how IT-based communities come into being and best grow and function.
The first of these is a paper award we will be sponsoring with the International Network for Social Network Analysis (INSNA). This award will go to papers that focus on empirical studies of collaboration and collective development of software projects, including the development of open-source software. Related collective products like documentation, support, and design, and studies that highlight important group processes and practices associated with robust software, will also be considered. More information about INSNA can be found at www.insna.org. The site is undergoing a migration and revision, and the details of the paper awards will be posted in January when the new site goes live.

The second activity is Microsoft's sponsorship of the Computer and Information Technologies Section of the American Sociological Association's (CITASA) pre-conference and graduate workshop on July 31, 2008 in Boston. This event combines a pre-conference on information and communication technologies (ICTs) and "Worlds of Work," building on the theme of the 103rd annual meeting of the ASA, and a workshop for 20 selected graduate students researching any aspect of the sociology of communications or information technologies.
The program will include a keynote address by the winner of the "Microsoft CITASA Port 25 Award," a series of presentations on ICTs and the sociology of work, especially in distributed and virtual environments, and a series of select student presentations of work-in-progress (on diverse themes within the sociological study of communications and IT) to both a general audience and to a mentor panel of well-known and established researchers in the field. For more information, visit http://citasa.ist.psu.edu/pre-conference.

These activities are only the tip of the iceberg when it comes to Microsoft's open source involvement. From contributing code to developing concepts, Microsoft is actively engaged in open source, and is getting more involved daily. I am delighted to spend my time thinking about new ways we can learn about and participate in the open source community. Working with this team and many other people across Microsoft to change (as Bryan puts it) the company's open source "DNA" is a lot of fun, and I can't wait to see what we'll do next. I anticipate and welcome your feedback as we continue to move forward, together.
by Bryan Kirschner on December 04, 2007 08:10pm
There’s been a flurry of articles and blogs about Microsoft’s open source strategy lately, spurred in part by an interview with Bill Hilf (see the pieces by Zachary, Rodriques, and Connolly), and by a comment from davidmeyer on my previous post.
Collectively they make me think of a bunch of things to blog about. Today I’m going to start with something that struck me about davidmeyer’s comment (out of unabashed favoritism for Port25).
The nub of the matter is that by many measures, Microsoft and open source are both growing. But what is the nature of the relationship? Is there a relationship at all? Are they growing coincidentally, like ships passing in the night in the same general direction? Complementarily, in a mutually reinforcing way? Or despite one another? My impression from reading davidmeyer’s comment (as well as others by people I respect) is that statements in the press loom a lot larger in the minds of other folks than in mine as indicators or causes—or both—of the nature of that relationship. What I mean is that once you believe open source and Microsoft are both established parts of the IT landscape, talk really becomes the “tail wagging the dog.”
Let me use a little thought experiment to share where I’m coming from: consider the relationship between Microsoft and Oracle. Both companies are, I think, universally regarded as established parts of the IT landscape. As such, both companies devote a lot of effort to direct, head-to-head competition--we can take some type of sustained competitive activity, now and in the future, for granted.
At the same time, both companies devote substantial effort to complementary work (there’s all kinds of stuff at the Oracle .NET developer center – community discussion, technical resources, marketing collateral – and it is one of three such centers, alongside those for Office and Windows). So there’s clearly more than one dimension to the relationship.
So if somebody asked me “what about the complementary relationship between Microsoft and Oracle?” --what would I think about as indicators? I’d look at the technology—like application availability, compatibility, interoperability, and performance.
I’d consider the people and the ecosystem—developers and ISVs.
And I’d want to understand the efforts underway to work together and find joint opportunities, tune and optimize, and innovate.
Probably one of the last things I’d consider as an indicator is what’s happening in the press. And the question of whether (for example) Larry Ellison and Steve Ballmer had anything nice to say about one another to journalists wouldn’t be something I’d spend much time thinking about at all. This is not to discount the impact of “talk”, and not to discount the reality that what folks read in the media can help make them more excited and confident—or suspicious and discouraged. And Oracle and Microsoft—two discrete companies—are not a directly applicable comparison to Microsoft and open source in general. But Port25 principle #3—No comment goes unread & every idea (common sense required) is openly discussed—really jumped out at me as I was reading the items linked above (no, I don’t have the principles memorized—they are printed out and hanging immediately to the left of my monitor…); thus, today’s post.
(And yes, I don’t think it’s even a close call that the indicators I consider important favor an excited and confident view of the relationship between Microsoft and open source—but that’s something I’ll pick up on another blog.)
by Sam Ramji on August 30, 2007 05:57pm
On the day before OSCON officially kicked off, I was heading back from the Oregon Convention Center to downtown and ended up standing on the MAX next to someone who caught my attention for two reasons. First, he was the first person to tell me about the Tim O'Reilly/Eben Moglen conversation from earlier that morning, which I had missed. Second, he had a cool flash-memory microphone that he used to record podcasts on the fly. Turns out it was Barton George (http://www.blogs.sun.com/barton808/), Group Manager, Free and Open Source Software at Sun.
I had a nice chat with Barton (and we helped each other navigate Portland's heavily-under-construction downtown), and little did I know he also had a conversation with Sam Ramji that day. The podcast has been posted and you can download it here. Enjoy.
From Barton's Blog:
"Topics: Where the Open Source Software Lab fits within Microsoft; How big is Sam's group; When software technologies compete, you win; What reaction does he get when he turns up at FOSS events; Debating Eben Moglen at OSBC -- no one wants patent Armageddon; Is there a Wubuntu in the works?"
by admin on May 30, 2006 11:53am
‘Alone Together’ and a little economics
Time for some confessions. I’m fully addicted to World of Warcraft (abbreviated WoW). There, I said it. I’ve long been a gamer, particularly of Massively Multiplayer Online Games (MMOGs) and Real Time Strategy games (I’ll take you in Starcraft *any* day of the week), but it’s been a while since a game has managed to work its way this deeply into my life. And although I balance fairly well with Call of Duty 2 on my Xbox 360, WoW is a great game that has me hooked.
I have a relatively busy life, so video game playing is usually relegated to the wee hours, typically after 11pm (I’ve never been a big sleeper), and I’ve even played at 35,000 feet on a wireless/satellite connection while flying on an SAS flight from Amsterdam to Seattle (yes, it’s that addictive). To make matters worse (or better?), when I’m not playing I often like to read about video game theory, particularly papers that research the sociological aspects of multiplayer games. I just finished what I think may be the best paper I’ve read thus far on WoW.
The paper, titled “Alone Together? Exploring the Social Dynamics of Massively Multiplayer Online Games”, from researchers at Xerox PARC and Stanford’s Virtual Human Interaction Lab, is a fascinating look at the sociological aspects of WoW, with some particularly insightful analysis into what really makes the game ‘work’ and succeed. What sold me on this analysis were their research methods: from the launch of WoW (Nov. 2004), the researchers were active WoW gamers and wrote software extensions to the client-side UI to capture interesting game statistics (every 5-15 minutes) while they played. So this study is based on data obtained from the game itself, rather than interviews or surveys. This is what I call research! And you can tell just from reading this paper that these guys are real WoW players, not ‘observers’ from the outside.
I won’t recap the entire paper, but I will highlight some of their findings, as I think they’re relevant for this forum of discussion on Port 25. Although WoW is a highly social virtual environment, the authors find that many of its players play alone (hence the title ‘Alone Together’), and their results describe a different type of ‘social factor’ at work in WoW. The importance of an audience, a social presence, and a spectacle are the three factors they find explain the appeal of being ‘alone together’ in multiplayer games like WoW. In short, the authors tap into what film theory often calls the voyeur phenomenon – where the audience/viewer enjoys the ability to look inside someone else’s life. When movies truly succeed at ‘suspending disbelief’, it is often because the filmmaker has convinced the viewer that they truly are the ‘voyeur’ of the action onscreen. Of course, what interactive multiplayer gaming adds to this dimension is the ability to be both viewer and subject of voyeurism, which creates the appeal of both ‘watching’ and wanting to ‘be’ the level 60 Night Elf rogue displaying their/your prowess in front of others. Here’s Sam, one of our own OSS lab engineers, showing his stealthiness by sneaking into Undercity.
Name: Jbauer Race: Night Elf Class: Rogue Level: 60 Server: Frostwolf Guild: Dark Front
This phenomenon, as the paper identified, perpetuates both game play and game satisfaction (i.e., the ‘alone together’ trend).
So how does this relate to Port 25? A point raised at the end of the paper is the need for social navigation tools to better understand certain dynamics in games like WoW (such as guild cohesion or churn rate). When reading this paper, I couldn’t help but think how this type of research in social dynamics might be applied to software development communities. Granted, working on code involves different dynamics than battling basilisks, but there are many principles and characteristics that are very similar – network-based, remote communication; level-based grouping, with different dynamics of interaction based on level and participation; satisfaction driving continued participation in a group; etc. I think it would be interesting to look at possible correlations between these two kinds of social networks (MMOGs and software communities), as it’s often at unusual intersections that we find meaningful patterns. If nothing else, it will likely result in some useful social navigation and analysis tools.
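To make the idea of a “social navigation” metric concrete, here is a toy sketch of the kind of churn-rate measurement the paper calls for, computed from two periodic membership snapshots of a group (a WoW guild or an open source project roster). All names and numbers here are made up for illustration; the paper does not prescribe this formula.

```python
# Toy churn-rate calculation from two membership snapshots of a group
# (a guild roster or a project contributor list -- all names are made up).
def churn_rate(before, after):
    """Fraction of the original members who are gone in the later snapshot."""
    before, after = set(before), set(after)
    if not before:
        return 0.0
    departed = before - after  # members present earlier but not later
    return len(departed) / len(before)

march = ["ana", "bo", "cy", "dee"]
april = ["ana", "cy", "eli"]  # bo and dee left, eli joined

print(churn_rate(march, april))  # 2 of the 4 original members left -> 0.5
```

The same snapshot-and-diff approach would work for either community, which is exactly the kind of cross-domain tooling overlap suggested above.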
Lastly, I’ve been doing a bit of flying lately, so I’ve been catching up on my reading. I just finished David Warsh’s “Knowledge and the Wealth of Nations.” In the second half of the book in particular, the work of Stanford’s Paul Romer provides a very solid look at the economic issues that drive technological growth. I would recommend reading the section on the costs of ‘idea’ inventions and expected growth and returns. Many business books I read these days fail to recognize some of the essential economic principles Romer investigates – I hope to blog more on this soon.
by admin on April 27, 2006 05:27pm
Eighteenth-century philosopher and author Voltaire is famously credited with the line “I disapprove of what you say, but I will defend to the death your right to say it." This resonates very much with my own thoughts after going through the comments and feedback from my previous blog. I want to thank everyone for providing valuable feedback, regardless of whether I agree with it or not. The principle here is not to choose a side but to put a process in place that allows for an open and honest dialogue, and I am exhilarated at the results of this endeavor.
Moving on to “Consistency and Standards” – the short theme for today’s blog. One of the questions put to me recently was to share something that I might have benefited from during my past life in IT Operations. As I mentioned previously, I have been involved with IT Operations in some shape or form since 1989 – from a student working at Academic Computing Services at Syracuse University managing Macintosh and Sun SPARC clusters, all the way to the past three years managing 7x24 support of Class-A production services like AD, DNS, WINS, etc. for MSN Operations. I can vouch for the fact that consistency in implementation standards has saved the day countless times. The point here is: no matter what platform, toolset, operating system, or application you may choose, developing standards for implementing them consistently will always pay off in a lower “TCO,” or Total Cost of Ownership, in terms of supportability.
I remember a few years ago we were battling the spread of one of the malicious worms working its way across the Internet. We were in the middle of taking inventory of the configurations of our production servers, spread all across the world, that provided these mission-critical Class-A services. We all realized that adhering to a common toolset and standard SKUs for hardware as well as OS versions helped us reduce the deployment cycle of a patch from what had seemed like days to a matter of hours. You may ask, “Hey – how does that matter?” Well, imagine writing a different script for each type of configuration, and multiply that by the hundreds of servers spread across eight different datacenters around the world. That’s quite a chore, especially when time is not on your side, you’re facing a crisis, and you can only muster a limited number of resources for support. You also need to track the success and failure of applying the patch across datacenters and monitor where additional attention is needed.
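The workflow described above – one standard configuration, so one deployment script, plus per-datacenter success/failure tracking – can be sketched roughly as follows. This is a toy illustration, not the actual tooling we used; the inventory layout, hostnames, and patch step are all hypothetical.

```python
# Toy sketch of a patch rollout against a standardized server inventory.
# Because every server runs the same standard SKU and OS image, a single
# script covers all of them; results are tallied per datacenter so we can
# see where additional attention is needed. All names are hypothetical.
from collections import defaultdict

# Hypothetical inventory: datacenter -> servers (all one standard config).
inventory = {
    "seattle": ["sea-dns-01", "sea-dns-02"],
    "dublin": ["dub-dns-01"],
}

def apply_patch(server):
    """Stand-in for the real deployment step (copy, install, verify)."""
    return True  # pretend the patch succeeded on this server

def roll_out(inventory):
    """Apply the same patch everywhere; tally results per datacenter."""
    results = defaultdict(lambda: {"ok": 0, "failed": []})
    for datacenter, servers in inventory.items():
        for server in servers:
            if apply_patch(server):
                results[datacenter]["ok"] += 1
            else:
                results[datacenter]["failed"].append(server)
    return dict(results)

print(roll_out(inventory))
```

The point of the sketch is the shape of the loop: with one-off configurations, `apply_patch` would fork into a branch per config type, and that is exactly the multiplication of effort described above.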
So, what does that mean? That complex environments require standards that work “for and with” IT administrators. Admittedly, “one-off” or “out-of-standards” configs are very easy to agree to if we’re trying to please a group or mend fences with a specific customer. But in reality, we’re doing them (our customers) a disservice by putting their environment in harm’s way and increasing their risk. Why? Because supportability of their environment is ultimately our responsibility. So…the more colors we put on the fence, the more painters we will need and the longer it will take…
by admin on May 10, 2006 07:03pm
Share your interop story...
...and find fame and fortune! Maybe not, but we will share your story with the World! (At least that part of the world that visits Port 25...)
We're looking for unique and creative ways that you, the visitors to Port 25, have solved interoperability challenges. Whether you've done something creative in your own mixed environment or you've helped a customer overcome a technology challenge, we want to hear from you. Each month we'll showcase the most unique and challenging stories submitted by individuals, integrators, and ISVs on Port 25.
Submissions should include:
We'll work with the project submitter to determine the best way to highlight your success on the site. If your project is chosen we'll send you a Port 25 software care package.
Our first submission deadline will be May 25th at 12:00 am PDT.
Please send submissions to: email@example.com with the subject: "Interop Challenge".
by admin on May 24, 2006 01:57pm
I'm reading Open Sources 2.0 right now. It's a well-composed book of short essays by founders and luminaries of the Open Source movement - people like Chris DiBona, Ian Murdock, Matt Asay, and Danese Cooper, to name just a few.
So far I've read essays by Mitchell Baker (Mozilla), Chris DiBona (Google/Slashdot), Jeremy Allison (Samba), Ben Laurie (Apache), and Michael Olson (Sleepycat). They are all well-written and insightful. The most consistent conclusion of the essays I've read so far is that where development is concerned, Open Source development is not that different from commercial software development - similar (although usually more rapid) lifecycles, requirements, and bug tracking. The key differences the various authors cite are the greater passion and willingness of open source developers to go beyond "working hours" to solve problems, and a general preference for coding over writing documentation. This short summary unfortunately trivializes the excellent essays, and I encourage you to buy the book and read them yourselves.

I believe that Mitchell Baker's essay in particular offers the most powerful lessons for proprietary & commercial software development companies on how to adapt their practices to shipping open source software. In the Mozilla project, Mitchell was at the forefront of the wrenching practical and emotional shifts required from both AOL/Netscape management and the open source contributors to the project.

Interestingly, Ben Laurie attacks the idea that "many eyes make all bugs shallow", one of the key claims about open source software quality. I myself have been a fan of this idea, and I was surprised to see him dispute it. To put his statements in context, however, Ben is specifically discussing security flaws, which he defines as a different class of problem from a standard "bug" or software defect. His point is that it takes deep expertise and hours of dedicated effort to find security flaws, and that most eyes cannot see them.
The most provocative essay I've read so far is by Michael Olson, who discusses the "dual licensing" model in detail. In short, dual licensing is a commercial Open Source software (COSS) approach that uses the GPL to turn full ownership of software IP into a self-sustaining open source community, while selling a proprietary license for the same source to proprietary vendors. The proprietary license grants the buyer more rights, including no reciprocity - no obligation to release their own product under the GPL. This way, paying customers get the benefit of the open source product while retaining much stronger IP protection for themselves. Michael's summary of this balanced model is that the licensing & technology combination must be designed so that "Open source users experience only pleasure in their use of the software" while causing enough pain (Michael's word) to enough customers to make the business of selling proprietary licenses profitable.
This comes to mind most strongly to me because of some of the debates I've seen in the comments on Port 25. Some readers believe that any commercialization of open source software is downright wrong, and a violation of the principles of Open Source. Other readers seem quite willing to allow developers of open source software to make a living from their work. I think this may be an irresolvable dispute - a clash of ideals between Open Source as a movement and open source as a development, marketing, and commoditization model.