by admin on May 31, 2006 12:28pm
But Then Face to Face

I’ve spent many hours over the past few days combing through the comments on Port 25, and the comments about Port 25 on other sites (blogs, industry news, etc.). I was struck by the mix of hope and suspicion. When you don’t know who you’re dealing with, suspicion is a natural result, compounded in this case by years of mistrust of Microsoft’s motives. I realized while combing through the posts and discussions that I never took the time to introduce myself.
I have been a science geek for as long as I can remember – not great for one’s social life but at least I can say I’ve been kicked out of Chemistry class for discussing quantum physics in the back row…
I’ve worked in Silicon Valley for 12 years, and before that studied and worked in San Diego. My degree is in Cognitive Science from UCSD (a combination of neurobiology, artificial intelligence, and cognitive psychology, founded by the great Don Norman). In the Valley I worked as a software engineer for several years, building desktop, distributed, and web applications in C++ and Java, including DCOM and J2EE. I worked at a couple of normal software companies, and worked crazy hours at 5 startups.
My best times in software engineering were at Ofoto (now Kodak) where I ran the web development and middleware engineering teams. We built a highly scalable photo service using Tomcat and Jakarta on Solaris and Linux, built our own Java-based persistence layer, and used XML for internal and external integration – old hat now, but this was in 2000.
After Ofoto, I went to BEA Systems as Web Services Principal Architect. I got the chance to work with the WebLogic Workshop team (aka Crossgain, the high-profile defection of Tod Nielsen and Adam Bosworth from Microsoft) and customers like Merrill Lynch on “web services architectures” – no one called it SOA back then. Workshop was later open-sourced as Apache Beehive under the leadership of Carl Sjogreen (now a product manager at Google). In late 2002, I joined the WebLogic Integration team to do technical market development and product strategy. I learned a lot about software strategy, got hooked on the web services management concept, and convinced the product management team to build Quicksilver, which eventually became the AquaLogic Service Bus.
In 2004 we started seeing real impact on the core business from JBoss, and my GM Chet Kapoor wanted BEA to get into the open source software game. Top management didn’t like the idea, and shortly after Tod Nielsen left the company, a few dozen directors and VPs followed his lead, as did Chet. Shortly after I’d joined Microsoft, Chet became the CEO of Gluecode and asked me to join. I couldn’t see how their business model could be defended, and stayed at Microsoft. Six months later, Chet sold the company to IBM and became VP of IBM’s Open Source group. I admit that I kicked myself.
I joined Microsoft originally to work directly with startups, in the hope that I could have a positive impact on people pouring their hearts and minds into risky technology bets. After batting 1 for 5 I know how tough it can be. Microsoft generates nearly all its revenue from partners (96%) but gets hammered on lack of innovation, so this seemed like a good fit. In Dan’l Lewin’s group in Silicon Valley (Mountain View) I got to meet and help a number of different startups. During this time, I saw advantages of open source economics for some of these companies, especially in SaaS. It was clear to me that something had to change in our licensing and pricing – two very challenging things to shift. I spent several months advocating within the company for change, with good results.
When Bill Hilf offered me the chance to join the Open Source Software Lab, I jumped at the chance. Open source is a pivot for the software industry at large as well as for Microsoft. I’m very curious to understand the breadth and depth of technologies available in open source, and deeply committed to driving interoperability between open source and Microsoft technologies.
This is longer than I’d intended, so I’ll stop here. For those who want more background, my blog is at http://samus.typepad.com, and I have to point out an interaction I had with Matt Asay, a smart and outspoken leader in the open source community.
PS: In my first post (“Why is it called Computer Science”) one reader pointed out the similarity of the topic to Paul Graham’s brilliant essay “Hackers and Painters”. While the point was made with a degree of suspicion, I’m grateful to the poster for leading me to Paul’s essay. Thanks, cblazek.
by jcannon on August 01, 2006 02:51pm
Today, the IT departments offering and managing various IT services might find themselves in what I would call a “pressure-cooker”. They are faced with a multitude of tasks and added pressure to maintain daily operations while driving efficiency, managing the growing complexity of service offerings and, most importantly, doing so while keeping pace with industry best practices. This has been one of the most explosive areas of growth and re-examination for the past few years. Back in my Ops days, I trained under ITIL (the IT Infrastructure Library) and MOF (the Microsoft Operations Framework) to get a first-hand look at some of the best Service Management practices in the industry. No matter how good I thought our Service Management practices might have been, I could not help but think in terms of the maturity level of the services that can be achieved by applying these principles. When you get down to it, you realize that the heart and soul of effective Service Management lies in how mature the offering and support model is. I have learned a lot from the ITIL Service Management Essentials course, which I attribute to the research and practice that have gone into developing these models. I’d like to share with you what made sense to me:
I am very eager to hear back from those of you that are an integral part of the Service Management Lifecycle. Please share your experiences, challenges and learning with us.
Kindest Regards and have a great week ahead!
by anandeep on October 27, 2006 12:27pm
I loved doing development in a research and university environment. You got to write cool code, prove new ideas, break new ground and generally ended up with bragging rights to say “I did an image recognition algorithm on a multi-layer architecture implementing reactive and planning parallelism on an autonomous robot!” The code had to work on your workstation or maybe on a demo machine once. Once you wrote the code, the only people who touched the system were hapless graduate students implementing the next big idea. They had to come to you and you could then dazzle them with your insight! This was “sexy development”!
When I moved to industry and wrote software for day-to-day use, things changed. Now you had all those people with “manager” titles telling you what to do, and those people called “testers” who told you why your code sucked (you couldn’t logically argue your way out of that because the weasels usually had proof!). Of course, being consummate professionals, you adapted. You got the religion of “bullet proof code” and worked on making sure the testers only had “fit and finish” bugs filed against you. Which the intern could work on. That was still fun - a different challenge, maybe not as “pure” as designing a neat new algorithm, but pretty good nevertheless!
You got past the testers but when they integrated the components that you had bullet-proofed to run end-to-end or user acceptance tests, unexpected stuff happened. Who would have thought that they would configure the machine that way or that another non-surface component could pass you null strings. Now you had to plan not only for the testers – but also for other developers and those pesky sys admin guys. How did they become sys admins? They couldn’t tell a polynomial solution from a log n solution anyway! But being nothing if not adaptable you adapted. You now built bullet proof AND idiot proof code. (My father, a military pilot and flight instructor, when teaching flight safety used to say “Nothing is foolproof because fools are so ingenious!”). It got a little boring at times but you still had the satisfaction of building something that was “engineered”.
I thought I had shipped the product, but I found I couldn’t sit back and relax. The support guys were making insinuations against my code. It didn’t work, they said – and I hadn’t put in the right level of granularity in the logs for them to do a diagnosis. This had nothing to do with Computer Science – any bozo could write stuff to the log. Why didn’t the intern do it? What do you mean he can’t make sense of my code? Yeah, I do know my code best. I guess it’s the right thing to do. Certainly not as fun as designing, bullet proofing and idiot proofing new code, but good supportability is “sine qua non” for a well done project!
Is that the end of it? No, further design and coding needs to be done to make software more manageable: to make the logs more systematic, to make sure that the product works when it’s deployed to multiple configurations, that it performs well, and that it fails gracefully.
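To make the granularity point concrete, here is a minimal sketch of what “diagnosable” logging looks like. All the names and messages are illustrative, invented for this example, not taken from any real product:

```python
import logging

# Records carry timestamp, level and logger name, so a support engineer
# can reconstruct the sequence of states the program went through.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("orderservice")

def reserve(item):
    """Stand-in for a real inventory call (hypothetical helper)."""
    return item != "out-of-stock"

def process_order(order_id, items):
    log.info("processing order %s with %d item(s)", order_id, len(items))
    for item in items:
        log.debug("reserving inventory for item %s", item)
        if not reserve(item):
            # The ERROR record names the state needed for diagnosis,
            # not just the bare fact that something failed.
            log.error("reservation failed for item %s in order %s",
                      item, order_id)
            return False
    log.info("order %s completed", order_id)
    return True
```

With INFO and DEBUG records like these, a support engineer can follow an order through the system without ever reading the code.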
Unless you specialize in a certain aspect of manageability, reliability or diagnosis – this is not “sexy” development. I probably wouldn’t get as much satisfaction from designing event logs as I would from designing a new search algorithm.
I was getting paid to do all this (ok, so it was my own startup but I was getting paid in VC money!) and it was still very hard. We did do it but it took lots of coaxing of our developers to pay attention to this. They all preferred to work on the next release that had all the sexy features. Even though they knew that to make the startup successful and still have a job, the unsexy stuff needed to be done and done RIGHT!
When you are working for the “love of the game” and not money, like in Open Source – who coaxes you? Who does the unsexy stuff? Are there enough people who specialize in the esoteric aspects of event logs, that this is not a problem? Or do users who need the feature “just do it” and add the code to the community version? Or are things slipping through the cracks?
I did a sweep of the usual suspect Linux developer mailing lists and found that there is concern about whether the unsexy stuff gets done. Here is a typical comment that I saw:
“I think that the only issue with Open Source boils down to this:
The things that nobody wants to do, but somebody has to.
Nobody wants to think about documentation. Or user interfaces. These things are hard, tedious, and a hell of a lot more boring than actually coming up with stuff to "make things work".” (from here)
Documentation is famously one of those things that is considered “unsexy” (well, ok in commercial software too). There are efforts like Grokdoc to make documentation of Open Source projects sexy by making it a priority. But the “who does unsexy?” issue is a real concern in Open Source.
We ran into a similar issue with event logs. You know the text stuff you write so that you can find out later what happened. At the lab we just did an investigation of whether we could tell if one of our boxes had crashed from the syslog and from console messages. We were a little taken aback by how many times we couldn’t tell what states the machine had gone through.
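A crude version of the check we ran can be sketched in a few lines. The heuristic is entirely illustrative (real syslog formats vary by distribution and syslog daemon): a boot marker that is not preceded by a clean-shutdown marker suggests the machine went down hard.

```python
def find_unclean_boots(lines):
    """Return indices of boot messages not preceded by a clean shutdown
    message -- a rough signal that the machine crashed. Note the very
    first boot is flagged too, since there is no prior history for it."""
    unclean = []
    clean_shutdown_seen = False
    for i, line in enumerate(lines):
        if "shutting down for system halt" in line:
            clean_shutdown_seen = True
        elif "kernel: Linux version" in line:  # boot marker
            if not clean_shutdown_seen:
                unclean.append(i)
            clean_shutdown_seen = False
    return unclean

syslog = [
    "Oct 1 host kernel: Linux version 2.6.18",          # first boot
    "Oct 2 host shutdown: shutting down for system halt",
    "Oct 2 host kernel: Linux version 2.6.18",          # clean reboot
    "Oct 3 host kernel: Linux version 2.6.18",          # no shutdown: crash?
]
print(find_unclean_boots(syslog))  # → [0, 3]
```

Even this toy version shows the problem: the answer depends entirely on which free-text messages happen to be present, which is exactly the gap a systematic event log would close.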
On doing some investigation, we found that the most influential project addressing this issue, the Evlog project (mostly supported by IBM), has been quiet since 2004. The code is used internally within IBM but was never mainstreamed into the Linux kernel.
How does one get unsexy stuff like this into the Linux kernel so that it is comparable to UNIX/VMS/Windows?
I contend that it is critical to Open Source that attention be paid to the event logs. They are critical in making any operating systems reliable. VMS/UNIX/Windows all went through the process of making their event logs more meaningful – and this has helped make them much more reliable.
We will be addressing this further in the next couple of weeks – keep tuned!
by MichaelF on August 14, 2006 11:30am
We had a large presence at OSCON, because we do believe that open source as a development model is here to stay. Bill Hilf was at the conference, and Port 25 has some of the interactions he had with open source luminaries Tim O’Reilly and Matt Asay. While Bill was having these interesting conversations, we at the OSSL (Open Source Software Lab) were busy attending the talks at the sessions and collecting “swag” on the exhibition floor! I do have swag from HP, Google, Intel, Dell, AMD, Oracle, ActiveState, Solid and MindTouch, but interestingly IBM was missing. Anyone out there have an IBM t-shirt to exchange for our Port 25 t-shirt? (see accompanying pictures)
The buzz in the air, appeared to me to be about open source both as and in business. The talks I gravitated towards (naturally) were about open source development practices. These ranged from taking closed source products and turning them loose as open source projects to driving pure open source development to using experts in a particular domain as contributors for a project not thought suitable for open source.
There was a common thread running through all these talks – the critical nature of development practices. No, there wasn’t anything earth shattering – these were development practices that are accepted as “goodness”. But the forces surrounding open source development make the use of these practices almost a necessity for projects to get off the ground. This is not to say that closed source companies do not follow these practices, but due to co-location, centralized management and other circumstances that go along with commercial development, some of the practices may not be rigidly enforced, and the lack of them may not impact the product as much. Open source development does not have that luxury (I refer only to successful open source development projects, not the long tail of open source projects that are fossilized on SourceForge and other repositories).
The practices fall into the following categories:
Consider this: you browse to an open source project which is new to you and download it (from a repository such as SourceForge or CodePlex). It doesn’t install, and takes a lot of wrassling to run. More often than not, this first impression decides your level of participation. If people can’t quickly find, try and run something cool, there are other fish in the open source sea!
The initial install is not the only thing that has to run right. For an open source developer working on a module, adding or modifying some source code, building it, and running it are part of the iterative process that lets developers be productive. A system which doesn’t make the dependencies transparent, and which doesn’t have a build system that includes all the necessary files (and NOT the unnecessary ones), will probably not get good developer input.
The easy build thing has been known for a while. At Microsoft, product groups have the concept of daily builds – if you as the developer “break” the build, you don’t go home till you fix it. In order for this to work, each developer should be able to build the system on his desk easily.
Quick iterative program development in the large without hassle is the name of the game. The very nature of open source development which needs to attract developers to gain momentum leads to a focus on easy builds.
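The daily-build gate described above is simple to automate. This is a sketch under stated assumptions: the build command, the messages and the trivial stand-in “build” are all placeholders of my own, not Microsoft’s actual build tooling.

```python
import subprocess
import sys

def daily_build(build_cmd):
    """Run the project's build command; return True on success.
    The social contract: whoever broke the build fixes it before
    going home."""
    result = subprocess.run(build_cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print("BUILD BROKEN -- do not go home:\n" + result.stderr,
              file=sys.stderr)
        return False
    print("build OK")
    return True

if __name__ == "__main__":
    # Illustrative: a trivially succeeding "build" stands in for the
    # real compiler or make invocation.
    daily_build([sys.executable, "-c", "print('compiling...')"])
```

The interesting part is not the script but the precondition it encodes: every developer must be able to run `build_cmd` on their own machine, which is exactly the transparency-of-dependencies requirement above.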
There’s doc (or a blog or a newsgroup) for everything
Most open source developers don’t work in the same building. I am talking about open source developers in the community, not those employed full time by commercial open source companies. (Though most commercial open source developers have to interact with community developers on their virtual team). This means you can’t walk to the next office and ask the developer about how the API call really works!
It follows that, at any time of day or night, answers to questions like “how does the API call really work?” should be available through internet-accessible means. This could be a doc, a newsgroup, a wiki, a blog or any other easily accessible repository.
Documentation by developers, you ask, isn’t that a mythical being?
That’s exactly the point – open source developers do write docs, they just don’t recognize that they are doing so. In order for an open source project to truly take off, education of new developers is a must - both when they are viewing code and when they are looking at documents explaining interfaces, how things work and the meaning of life! Ok, maybe the last thing is not strictly necessary, but it does make reading documentation much more fun. Who wouldn’t want to work on Ruby after reading “why’s (poignant) guide to ruby”?!!!
Every lifecycle stage and artifact is important
The way you work your way up the “committer” chain in open source projects is to prove yourself useful. The path to building credibility is to write documents, find bugs, review code and make yourself useful in a pretty stiff meritocracy. Even when a developer achieves the golden “commit” privilege, they continue to participate in those activities.
Not having departments with people exclusively devoted to test, doc or reviews makes the development of a “caste system” difficult! Development managers cannot put pressure on test managers to shortcut tests – because the development managers and test managers could be the same person!
This is a little bit more subtle than the “more eyes make all bugs shallow” argument – that is only true if those eyes don’t think of looking at bugs as work that is to be done by other eyes!
This is even truer of documents and education – it isn’t some tech writer with expertise only in writing who writes the important documentation, but the luminaries of the community. When the “gurus” (the Sanskrit word for teacher) do what they are meant to do – then nirvana is attained (I loosely paraphrase from the Bhagavad Gita!).
Sprints not marathons
Consider having developers in the US, UK, India and Australia – when is the best time for a meeting? When it’s morning in the US, it’s night in India – and who knows what time it is in Australia? Software companies whose code is all developed by their own employees can have coordination meetings on a schedule decided by someone – not so in open source. This means that coordination can’t be complex and long drawn out.
So open source makes use of XP principles: work on a small feature that doesn’t take more than a month (OK, so it isn’t extreme XP, where a feature shouldn’t take more than two weeks!). Based on community pressures, priorities can be decided. Longer-term projects are either done by a single person or by a co-located team (at commercial open source companies, for example).
That means planning horizons are small, and mistakes can be rectified without huge loss of time. Releases happen when there is a critical set of features ready. The community is able to get their hands on new features early and give early feedback, which further cuts down the time for stable development.
Of course this means that customers are running hard just to stay in one place, if they accept all the releases! But at least they have the choice…!
Visibility into EVERYTHING
Open source is not just about the source being visible. More people do look at server side code, but even on servers the number of code readers is small compared to the number of contributors.
Visibility in open source is about everything – how many and what bugs are there, time to resolve bugs, prioritization of bugs, who is contributing what, what comments were made about whose code, whose code was included and whose wasn’t etc. etc. etc.
This not only acts as a great feedback mechanism to users, it provides for real and open debate about priorities and execution. As long as the project is handled on a rational basis, people can predict the state of the project. They can anticipate when a feature they want or a bug they have will be fixed. It also allows users to submit code to fix bugs for their own problems and see it transparently go through the system.
My full time job at a previous position was mediating features for a group of customers (numbering in the 10s). This required the full energy of a team, which was not a development team, to gather this information in a closed source environment and then disseminate this information. Of course there were mistakes in information gathering and communication, since the customers only got a view of the project through an intermediary. Building trust with the customers took the better part of the year and resulted in a development process that was not as efficient as it could have been.
So flame wars notwithstanding, visibility into everything is an advantage for open source projects.
Community, Community, Community
In order for developers to be productive, they have to communicate with whom they want, when they want, and get back what they want. This puts a burden on an open source project’s leaders to make sure that this responsiveness is part of their community. In practice this means answering 250 e-mails a day, having IM on all the time and putting systems in place that make this possible. One open source company I know of uses categorization software just so that the appropriate person can look at an e-mail, and has fast mail systems that allow sub-second previews of the e-mails!
What this gains the company is encapsulated in a quote from an OSCON presenter – “When we had a closed source product people worked 9 to 5, but with open source there is so much interaction with the community that our developers are strongly motivated to work on finding solutions and building features and they are much more productive!”
Actually, come to think of it, there isn’t a thing I have said here that wouldn’t work where I work! In fact, these processes are already at work at Microsoft. I am not talking only about the work that we are doing at CodePlex – releasing source for products such as the Power Toys for Visual Studio collection available on CodePlex, or the WiX tool available at SourceForge – but also about the code sharing internally within Microsoft. Since Microsoft does both platform and application development, application developers often need, and have, access to the bug databases and source code of platform-level components. There is a lot of give and take between teams of users within Microsoft. This visibility has also been extended to users such as academic, government and enterprise users under license agreements with Microsoft.
by Bryan Kirschner on December 14, 2007 10:11am
My participation in technology was transformed by the Commodore 64. That's why I--like others here at Port25 and over at Slashdot--still love it after 25 years. Natales posts: "I can't emphasize enough how "mind shaping" was learning assembly language on the 6502..." Neither can I. I was 10, and needed to learn assembly to make a game I was writing run faster. I still remember there was a free 4K block of memory up at address C000 (49152) you could use to stick your assembly code in.

"Participation" is a theme you've probably picked up on here at Port25. That's not just because most of us here share some sort of experience that enabled us to participate in technology in new and rewarding ways. It's also because it's an important element in enabling Microsoft and open source to "grow together." I am confident about Microsoft and open source growing together. With that said, it's a fair point that the best of open source is not--yet!--established as a universal part of "Microsoft DNA." But a tradition of growing opportunities to participate in the opportunities offered by technology is.

It's easy to forget today that providing free SDKs for developers was at one time a significant departure from common industry practice--a business model innovation. Business and technical approaches that enabled third parties to develop on top of a "platform" are part of Microsoft's heritage. The importance of growing the number of people able to participate in that ecosystem as creators or entrepreneurs is widely understood as simply smart business.

Following Tim O'Reilly's insight, we think broadly about the "architecture of participation" as "systems that are designed for user contribution." One thing we do is work day by day to learn how open source concepts and approaches offer new or enhanced ways to grow participation. And then we work to understand what's already being done across Microsoft--and what could be done that's new or different.
After talking with folks here (Bill Hilf is an ex-C64 hacker and Sam Ramji got started on a PET), I realized that understanding the people and projects and perspectives of our open source community inside Microsoft isn't possible without more transparency about this idea of "participation." So this post is an introduction for further blogs--and some new bloggers--on the ways in which we're working on and thinking about growing participation now and in the future, whether by effecting change at Microsoft, sharing information more broadly about opportunities that already exist, or working with leaders in the technical and academic communities on new ideas. (And if the Commodore 64 changed your life too, by all means chime in--or share what other technology made a big difference for you!)
by MichaelF on April 13, 2007 11:10am
From May 22nd through the 23rd, OSBC, now produced by InfoWorld, is being held at the Palace Hotel in San Francisco. Microsoft is a Silver Level Sponsor this year, and we look forward to the discussion and networking.
Just a couple of quick notes:
We look forward to seeing you in San Francisco!
by anandeep on November 16, 2006 07:39pm
One of the great things about my job at the Open Source Software Lab (OSSL) here at Microsoft (besides being able to work with both Linux and Windows!) is that I get to go to computer science research conferences. I try not to attend the purely academic ones, but rather the ones in which both industry and academic research issues are addressed.
I just got back from ISSRE (pronounced “is-ree”), the 17th IEEE International Symposium on Software Reliability Engineering, 2006. This conference covers everything that impacts the reliability of computers – from “drivers of reliability” to “testing to ensure reliability” to “doing static analysis of programs”.
Skeptical that anything they talk about here would be useful to y’all? Well, think again! They have all kinds of practical advice on doing things right. The talks I really enjoyed included:
Only one of the above talks was from an academic institution, the other two were based on experience with software being widely used in the consumer and application server space.
The one thing that I enjoyed the most was a tutorial on “Software Productivity and Reliability – Tools and Techniques” given by Prof S C Kothari of Iowa State University. The tutorial title is appropriate but I think what it should have been is “Learn to Read Programs Properly!”
Kothari believes that a lot of attention has been paid to what he calls “program writing” – developer tools and such. This has resulted in the creation of very complex software artifacts. Most real-world applications today are built on these already-built complex software systems.
The problem is that almost all academic institutions and programs focus on the inventive aspects of programming. This means that they teach algorithms and techniques assuming that everything will be written from scratch. Real life is of course never like this – it is difficult if not impossible to be a computer software professional these days and work just with your own code. More often than not, most developers have to wade through other people’s code to understand, use or modify it. Developing software today involves a lot more than just writing it.
The skills to “read programs” are acquired the hard way – and sometimes never fully mastered. Kothari suggested that there needs to be an emphasis on program reading in training and that tools need to be built to aid in reading programs and forming the proper mental model of them. The barrier to future software productivity is not machines or algorithms but human mastery of the complexity of the vast amount of critical software out there.
Program reading is not easy, as most people in open source know! This is due to:
There are some tools available to assist in program reading, such as Cscope (BTW, Hank Janssen of our lab wrote parts of Cscope), but there has not been a lot of attention paid to WHAT program reading needs in order to address the complexity issues raised above. Kothari has a company, EnSoft, that provides some very cool tools to do the kinds of things that are needed for reading complex programs. The tools are based on abstractions that are used in program comprehension (there is an IEEE Conference on Program Comprehension held every year). Kothari illustrated one that he called a “matching pair” (MP). Matching pairs are defined by a syntactic pattern – which could be artifacts (such as matching parentheses) or events (such as locking or unlocking a resource). There are many types of such matching pairs, and to make a program correct a matching pair can be defined with respect to control flow, data flow or both. A control-flow matching pair means that a function f would need to be followed by a function f-inverse in EVERY execution path that the program could take. Looking through every execution path is hard (and doing it via automated static analysis of programs is provably intractable in general) – especially in something like the Linux kernel.
Using the tool that Kothari demonstrated, a call graph was generated and a “query language” was defined over call graphs. Looking for matching pairs with the tool became unbelievably simple. This was just one of the things that can be done to reduce the complexity and the time taken to figure out what a very complex program is doing.
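The matching-pair idea can be illustrated with a toy example. This is my own simplification, not EnSoft’s tool: enumerate every path through a small acyclic control-flow graph and check that each `lock` is balanced by an `unlock` along the way.

```python
def all_paths(graph, start, end, path=None):
    """Enumerate every path from start to end in an acyclic graph."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        paths.extend(all_paths(graph, nxt, end, path))
    return paths

def pairs_matched(path, ops, open_op="lock", close_op="unlock"):
    """Check that open/close operations balance along one path."""
    depth = 0
    for node in path:
        if ops.get(node) == open_op:
            depth += 1
        elif ops.get(node) == close_op:
            depth -= 1
            if depth < 0:  # close with no matching open
                return False
    return depth == 0

# A toy control-flow graph with a buggy early exit that skips the unlock.
cfg = {
    "entry": ["lock"],
    "lock": ["work_a", "work_b"],
    "work_a": ["unlock"],
    "work_b": ["exit"],      # bug: this path leaves the lock held
    "unlock": ["exit"],
}
ops = {"lock": "lock", "unlock": "unlock"}

for path in all_paths(cfg, "entry", "exit"):
    print(path, "OK" if pairs_matched(path, ops) else "LEAK")
```

Enumerating paths explicitly like this blows up quickly (and is infeasible on anything the size of the Linux kernel), which is precisely why the graph abstractions and query language in Kothari’s tool matter.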
I think this is a real breakthrough – and I am now a confirmed advocate of program reading. I am hoping to work with Prof Kothari to do some more stuff with this – I hope to share the results if I do end up doing that.
Why do I mention this on this forum? Because this is something that open source developers and IT Pros have been doing for a long time. Open source developers have a culture wherein a lot of code reading is encouraged. And IT Pros have to constantly update and upgrade the scripts that they use to control and run their infrastructure. The cultural advantage lies with open source developers and IT Pros, but given that the complexity of software is increasing exponentially, everyone could do with a little help.
by jcannon on September 14, 2006 02:09pm
Over the past weekend, we discovered that some of the comments being posted to our blogs were being caught by the Community Server spam filter. Usually, this wouldn't be a bad thing - especially if you were around when we launched Port 25. However, the algorithm for catching spam had been unknowingly set to the strictest interpretation due to a recent server upgrade...so many benign comments had been caught over the past couple weeks.
This has been corrected & all comments have been set to publish. If you've had this happen to you - we apologize. Comment away! If you ever feel like something on the site isn't working the way it's supposed to - please let us know.
To make up for our oversight, the video below should provide a good laugh for anyone who works in IT – regardless of what operating system or development model you subscribe to :)
Thank you Long Zheng (I Started Something) for blogging about 4.
by MichaelF on October 23, 2006 12:34pm
Taking a brief detour from the thread about OSS and its similarities (or not) to law to take note of a couple recent publications, both of which discuss the interaction between traditional IT vendors and OSS:
In MIS Quarterly (September) (link) Brian Fitzgerald (University of Limerick—one of the must-read researchers on OSS, IMO) provides a comprehensive survey of what he calls “The Transformation of Open Source” with expectations for “Open Source 2.0.” He expects IT vendors—including Microsoft—to play significant roles in “OSS 2.0.”
In Communications of the ACM (October) (link) Pamela Samuelson (UC Berkeley) discusses “IBM’s Pragmatic Embrace of Open Source.” (The title pretty much speaks for itself as a summary.)
I highlight these because they seem to me (qualitatively) to reflect a trend in the literature. We’ll work on getting a better sense of what the trend is, and of researchers’ perspectives on it, to bring back to Port25…because (to bring things back to analogy and metaphor) they raise the question: Is a vendor in my OSS more a fly in the ointment or chocolate in my peanut butter?
by Garrett Serack on June 21, 2007 06:50pm
I'm pleased to announce ... er, myself, as the Open Source Community Lead here at Microsoft.
I'd have left this to Sam, but hey--why should he get all the fun.
I'm responsible for building communities of Open Source developers around Microsoft's platforms, both externally, and internally--yes, this means the product groups. I'm really interested in what kinds of things we can start building as Open Source software, and illuminating what we've already done.
I said a few things the other day on my blog that I think bear repeating:
This is a pretty wide reaching role, meaning that I touch a lot of ground. Some of the highlights:
There have been a lot of changes in Microsoft in the last few years, that folks can't yet see, and I'm hoping to expose that type of thing to the world, and bring the world of Open Source to Microsoft.
I'm not going to lay out my great plans in too much detail... I've found that actions speak louder than words, and have far more lasting impact. I'm focusing on what Microsoft is doing, and less on what has been said. I mentioned that too in my blog:
I don't get it... Microsoft and Open Source? Are you sure?
I know... I know. Y'all got some reservations about Microsoft with regards to open source. Well, I'm not going to try to convince you of anything. What I am going to do is to shine the light on the things Microsoft is doing to create communities in the Open Source world.
Add to that, I'm doin' some rustlin' inside of the company itself--as expected, there are a few tenderfoots 'round here who would just soon reckon' we didn't bother. Well, I got a cattle brand heatin' up just for the conversation.... We'll just see about that.
Somethin' about me:
I joined Microsoft in the fall of 2005 as the Community Program Manager of the CardSpace team, and I've been working with companies and the open source community to build digital identity frameworks, tools and standards to shape the future of internet commerce. I'm also co-writing a book titled Understanding CardSpace, which should be available in the fall of 2007. Prior to moving to the Puget Sound area, I had a lengthy career as a Software Development Consultant, moving from Developer, to Architect, to Mentor over the course of the last 16 years. As a life-long code-monkey, I've pounded out code on more than 20 platforms and 35 different languages, and I see no reason to stop there. I've put code into many open source projects, and I'd like to think that I share a very strong part of the Open Source vision that permeates information technology everywhere. You can catch all my posts on my blog at http://fearthecowboy.com .
In my next blog post I'll detail the promise--that is, my commitment to the community. I think it's important to know what you can expect, as well as my boundaries. I'll also have communication channels set up so that you can talk to me, either publicly or via confidential email.
by Garrett Serack on March 03, 2008 02:00pm
Day two moseyed late into the night...well for me anyway--cowboys wake with the sun.
Day three turned out to be a day full of surprises for me--most of the sessions were significantly more interesting than I would have guessed.
We started the day with a presentation by Bill McKinley on Windows Logo Certification (for which there is a great little quickie primer here). I highly recommend checking this out--the logo certification program provides some tools to assist with certification validation, and even if you have no interest in certification, running the tool will give you a rundown of potential issues that your customers will face.
After a break for more testing, Rob Mensching and Peter Marcu dropped by to give the team a thorough examination of WiX (the open source Windows Installer XML toolset). Again, very cool stuff. Admittedly, there seems to be a somewhat steep learning curve, but it integrates nicely into build scripts, and has all the flexibility you'd ever need.
After lunch, we did some testing, with a quick little jaunt to the Microsoft Company Store, where the attendees took advantage of Microsoft Employee pricing on some software and hardware.
We rounded out the day with a session on Windows Error Reporting -- you know when an app crashes, and you can send anonymous debug info to Microsoft? The information ends up in the WER system, where developers can register to get crash and hang information for their software and drivers. I knew that the information was collected, but previously, I had no idea how easy it is for app developers to get their hands on the data. I strongly recommend that you check it out.
While Wednesday was the last day for most of the attendees, a few stayed through Thursday, and I'll post a wrap-up on that tomorrow.
by jcannon on January 10, 2008 01:57pm
It's going to be a busy couple months in the open source industry, with a number of influential conferences convening over the next six months to discuss the latest issues, advances and topics facing OSS. More on those later, but I wanted to get something quick up on one in particular that Microsoft is participating in - the MySQL User Conference in April. Folks may remember our sponsorship in 2007, (Bryan has a good read on this) - and I'm happy to continue this support and participation.
As part of our sponsorship, we've negotiated a discount for registration we can extend to our community. If you're interested in attending, register at http://www.mysqlconf.com and enter code: mys08micr. This will give you a discount of 10% against the cost of your ticket....we hope to see you there.
PS. Thank you to our friends at MySQL, in particular Kaj Arno for the continued support. (Kaj - thanks for the calendar ;))
by Mark Stone on March 30, 2009 04:15pm
We should never forget that a key motivator for open source developers is fun. For student developers -- where open source really starts -- this is especially true.
We’ve been looking at several potential student projects in Croatia, and for the past several months have been lending some support to the PlugBlog project.
In many ways this is a classic open source story. Croatia is not a large country (population 4.5 million), nor does it have as highly developed a technology sector as, say, Scandinavian countries of comparable size. Combine that with a distinctive language of Slavic origin, and you have an environment in which there is very little motivation for commercial software providers to offer Croatian localization. Thousands of languages and dialects world-wide struggle with this same problem: they simply lack the critical mass and market opportunity to warrant commercial software localization.
Into this breach steps open source. Several local blogging sites in Croatia do, of course, post blogs in Croatian. But bloggers would like to have the client tools to compose in Croatian as well. Given the popularity of Windows Live Messenger as an instant messaging client, there was a natural opportunity for open source development to create a localization pack enabling Live Writer composition in Croatian. This is precisely what PlugBlog aims to do.
One of the interesting twists on life in the era of Service Oriented Architecture (SOA) is how enabling SOA is for open source. Plugins for Live Writer can easily be open source independent of the source-code status of Live Writer itself, because these plugins need only make web services calls to the Live Writer API. Indeed, a quick search of CodePlex shows more than 60 open source projects dealing with Live Writer. This is the kind of thriving little sub-community that SOA makes possible.
The developers working on PlugBlog are students, and they are doing this work as a student project. As such, it has a clearly defined project plan and specific milestones. The work they are doing will provide a valuable localized tool to Croatian bloggers, but it will also serve as an example of how other languages could integrate localization with Live Writer. This is all great, but you can’t stop developers from doing something just because it’s fun.
So I was surprised to see a check-in on this project that creates a connector for passing data from Skype to Live Writer. This wasn’t on the project plan. Talking to project coordinator Boris, he mentioned this was an extra they threw in, in their spare time. Given the huge popularity of Skype in Eastern Europe this shouldn’t have been surprising, and indeed if anyone had mentioned it during project planning it almost certainly would have been part of the original design.
But this too is part of the beauty of open source: user-driven innovation fills the gaps overlooked originally. I look forward to more Skype integration and more pleasant surprises from the Croatian team.
by hjanssen on May 02, 2007 07:11pm
So here I am, Amsterdam May 2nd 2007. At the Apache Conference. (A Microsoft person at an Apache Conference, what is this world coming to??)
I am going to blog from the Conference until it is over.
So, today the conference started in earnest with all the tracks kicking off. The first day was one of technical training. But this second day is where all the sessions started.
It all started with Sander Striker, President of the Apache Software Foundation. He described at a very high level what was to be expected in the next few days, and he talked about the following.
He described the ASF: established June 1999, a non-profit 501(c)(3) charity.
He talked about how ASF is much more about community than about code, ASF manages communities, not code.
As with most projects, open source or otherwise, there is a tendency toward burnout. He wants to make sure people stick around at the ASF by fostering a healthy community through respect, open discussion, and shared views and direction.
Today there are 43 top-level projects (6 more than at the last ApacheCon, Austin, October 2006). There are also 31 projects in the Incubator (compared to 38 at the last ApacheCon). Overall he expressed his belief that the future is looking bright and the ASF is very healthy.
Also, today there are 1500 committers worldwide and 220 members. Membership is about the individual, not corporations.
He closed by saying that people have a tendency to burn out on the infrastructure side. It is a tough job to keep doing.
Picture 1: Here is a shot of the attendance during the keynote and introductions
Being notoriously bad at guesstimating the total number of attendees at any event, I am guessing that there are about 250 to 300 people here.
A question from the audience resulted in a very interesting answer. The question was how one becomes a member. Sander's response:
Become a Committer first, and provide good quality work. If you keep contributing you might be proposed as a member. This will be subject to a vote.
But from my point of view, the path to membership is still somewhat unclear. I would think this path is better defined for those wanting to become much more involved.
Next up was the key note delivered by Steven Pemberton, Researcher at the Center of Math and Computer Science. His keynote was called:
Abstraction and extraction: in praise of
He talked about abstractions in programming languages, and then went into how complicated these abstractions still are today. Yet daily interaction with objects can lead us to confuse the concrete with the abstract.
One of the nice things about programming languages is that they abstract away detail--how data structures are implemented, how procedures are called, and so on.
He described a talk by Kernighan and Ritchie that he attended in the 70s, where they were talking about Unix and C. This gave me a nice flashback, and I am starting to feel pretty old. Thanks!
Some of the things we are struggling with today were the result of mistakes made when UNIX and C came to be. In his view, UTF-8 today is the result of the way they conflated characters with units of storage.
The intention of his talk was to speak more about usability, and designing for usability.
I took many notes while he spoke and am trying to compose them back into his keynote. Bear with me while I reconstruct them. :)
He stated that you shouldn't confuse usability with learnability. They are distinct and different. What he means is that if you want your software to be used by a large audience, you need to make it usable. Emacs (still my personal favorite) is a power tool; you can do great things with it. But it is not what I would call usable. (Powerful? Yes. Easy to learn? Not so much.)
What are the features of the websites you go back to regularly? What differentiates them from other websites with the same purpose that you don't go back to?
Forrester Research found four reasons for this.
Yet usability is usually the first thing scrapped when websites are built. The same seems to apply to the design of software.
Eric Raymond stated that making good software requires a lot of money to ensure it is usability-tested and well designed. This takes a large company with a large amount of money. OSS has not solved this problem yet.
Programmers like the command line; to them it is much more intuitive. ("Sensories" much prefer graphical design.) OSS programmers are intuitive in their use of the interface, yet the rest of the world is not; the rest of the world is much more sensory.
A Dutch magazine placed GIMP last in its review because of its poor interface.
The US Department of Defense discovered that 90% of the cost of software production is debugging.
For example, an AJAX-empowered page is a lot of work; Google Maps, the poster child of the Ajax generation, is more than 200k of code. He asked: does it truly have to be this hard?
He made a really funny comment: while preparing his presentation he checked how much processor usage was going on on his machine, then realized that his machine was dual-core--and discovered that his computer is now twice as idle as it used to be. :)
The centre of his talk was really about usability, much more so as it relates to languages. And I will give a plug here: it is basically the same argument I made in my blog a few months ago (he was probably more elegant in describing it). A link to the blog I wrote can be found here: Languages are becoming way too easy. In it I argue that languages are becoming easier, yet they and the operating systems they run on have not kept up. (Meaning both have a really hard time protecting the programmer from the outside world. :))
Some more data he gave that I found interesting: computers have become 40 times faster in 25 years, while programmers have managed to become maybe 2 to 3 times faster over the same period. That is because you still need to do too many things in languages. The example he gave was source code he found to display a clock. The clock part was only a few lines of code, but the rest of the 1000+ lines were taken up by setting up the framework--making sure redraws and sizing are handled, etc.
I will leave it at this for now. There is a lot more to write in the next few days, and I need to start reducing my blogs--they are becoming way too long!
Stay tuned, more to come in the next few days.
by Bryan Kirschner on January 26, 2007 12:45pm
I started this chain of blogs about the law-and-open-source analogy based on something Matt Asay had written that struck me as interesting—but didn’t sit with me as quite right. So it seems appropriate to tie up this set of blogs with something he wrote that seems to me to be entirely right, and that helped frame a comparison between law and open source that makes a lot of sense to me.
I previously blogged about the fact that legal knowledge actually seemed more “open” than “closed.” I also kicked around the idea that the supply of lawyers was especially “artificially” restricted. Without rehashing those discussions, I didn’t find those to be compelling as singular ways the legal domain was different from the software development domain.
Then I read a recent blog from Matt. Blogging about companies for whom it is an “open question” if they “are abandoning some of the community benefits of open source by having some of its technology proprietary,” Matt wrote that such companies are “trying to balance being a good community citizen with getting paid. It's not an easy problem to resolve” (my emphasis).
I read this statement as characterizing what a lot of companies and individuals—open source or not, in the software business or not—are trying to do. Specifically thinking about the legal analogy, here’s what struck me.
Among friends, family, and colleagues I happen to know a lot of lawyers (probably a result of both my wife and brother being lawyers rather than a character flaw on my part (lawyer joke)…). Thinking of this balancing act Matt describes, I realized that:
One lawyer I know, a litigator at a firm, recently made a change to a corporate (non-legal) job, in which he’s making about the same as he had been practicing law. In both cases, a pretty respectable overall salary.
Another lawyer makes literally half what he did as a litigator, working in a public (government) agency.
And one more lawyer I know makes literally a third of what he did, working for a non-profit doing public interest law.
And of course, lawyers always have the option of providing their services at no charge (pro bono)—the American Bar Association recommends 50 hours per lawyer per year.
(Qualitatively I would have to say intrinsic job satisfaction among the lawyers I know is inversely related to total compensation against these four cases. The assumption here is that, for most people, being a “good community citizen” is in itself at least somewhat rewarding.)
Additionally, individuals have broad latitude in the US to represent themselves (pro se), generating a “do it yourself” supply of legal services.
For the sake of plotting everything on two axes, you could plot the total units of legal service delivered (y-axis) against the payment received by the “deliverer” for that unit (x-axis)—in this view paying someone else would be a negative payment while DIY would be a 0 payment. (I don’t know exactly the shape of the curve here…but it actually doesn’t matter much for the sake of the thought experiment.)
Now let’s think about open source code with a similar plot in mind. Lakhani and Wolf’s open source developer study found 40% of developers were being paid while developing code. So on the chart above they land somewhere in quadrant 1.
Cutting the data a different way using cluster analysis, the authors show that 80% of the sample is essentially covered by work need/payment (cluster 1) or by two clusters where “intellectually stimulating” and “improves skill” lead by a long shot as motivations among predominantly unpaid developers (see page 13 of the linked document). This is code in quadrant 2 of the chart. “Obligation from use”—that is, giving back, something that looks like a commitment akin to pro-bono work—is the far-and-away leading motivator in the fourth cluster.
If we plotted all the units of code—open and not-open—in the world, would the shape and motivational dependencies look much different from the shape of all the units of legal service?
In quadrant 2 (no payment received by deliverer) in either domain would be a lot of things that “scratch your own itch” or are themselves stimulating and rewarding. In quadrant 1 would be a whole lot of things for which these characteristics apply less or not at all (or perhaps inversely—they are painful or tedious or otherwise unrewarding…).
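The bookkeeping behind this thought experiment is simple enough to sketch (an illustrative toy, with made-up numbers, not data from the Lakhani and Wolf study): classify each unit of service or code by the sign of the payment received by its deliverer.

```python
def quadrant(payment):
    """Classify a unit of service/code on the thought-experiment chart:
    positive payment -> quadrant 1 (paid work),
    zero payment     -> quadrant 2 (DIY, pro bono, for fun),
    negative payment -> the 'deliverer' is actually paying someone else."""
    if payment > 0:
        return "quadrant 1: paid"
    if payment == 0:
        return "quadrant 2: unpaid"
    return "buyer: paying someone else"

# Hypothetical units of work, with invented payment figures:
units = [("corporate litigator hour", 300),
         ("pro bono case hour", 0),
         ("paid OSS developer hour", 150),
         ("weekend patch", 0),
         ("hiring a contractor", -200)]
for name, pay in units:
    print(f"{name}: {quadrant(pay)}")
```

The point of the exercise is that both law and code fill both quadrants, and the curve's exact shape matters less than the fact that motivations shift as you move along it.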
This brings me all the way back to the question of law, software, and “openness.”
I took issue with the idea that the primary “problem” with the legal domain was that it wasn’t “open” enough. I think the process of thinking through comparisons between law and software illuminates the fact that there is no single magic bullet for increasing the supply of something—whether code or legal services—at lower cost. (Note I’m taking a macroeconomic and long-term view here: although I work for a company that sells software, in the grand scheme of things more-for-less is probably good for commercial interests and consumer interests alike. Neither I nor, I’d hazard a guess, whoever defined that product strategy for Microsoft feels “bad” that tens of millions of end-user developers can DIY hundreds of millions of lines of code using the Excel macro engine and the Office object model. That product substantially “shifted the curve” and was a win-win overall.)
Making more knowledge available—through exposure of raw artifacts like source code or judicial decisions, or through pedagogical materials—can shift the curve. So can increasing the supply of people trained to be developers or lawyers. So can decreasing the difficulty of delivering a unit of service or code (XNA really interests me in this respect…). So can making delivering some unit of service or code more intrinsically rewarding (fun, tear-jerking, intellectually stimulating, guilt-assuaging…). So can making it more financially rewarding to deliver some unit of service or code.
Balancing being a good community citizen with being paid is a hard problem to solve because there is no single formula for striking the right balance. But I find that to be a good thing: it translates, to be sure, into many opportunities to try to strike the right balance and fail—but also many opportunities for diverse individuals, communities, and corporations to co-exist, adapt, prosper—and surprise. A world where one size does not fit all may be messy, but it is a lot more interesting.