by anandeep on November 16, 2006 07:39pm
One of the great things about my job at the Open Source Software Lab (OSSL) here at Microsoft (besides being able to work with both Linux and Windows!) is that I get to go to computer science research conferences. I try not to attend the purely academic ones, preferring those in which both industry and academic research issues are addressed.
I just got back from ISSRE (pronounced “is-ree”), the 17th IEEE International Symposium on Software Reliability Engineering, 2006. The conference covers everything that impacts the reliability of software – from the drivers of reliability, to testing to ensure reliability, to static analysis of programs.
Skeptical that anything they talk about here would be useful to y’all? Well, think again! They have all kinds of practical advice on doing things right. The talks I really enjoyed included
Only one of the above talks was from an academic institution; the other two were based on experience with software widely used in the consumer and application-server space.
The thing I enjoyed the most was a tutorial on “Software Productivity and Reliability – Tools and Techniques” given by Prof. S. C. Kothari of Iowa State University. The tutorial title is appropriate, but I think a better one would have been “Learn to Read Programs Properly!”
Kothari believes that a lot of attention has been paid to what he calls “program writing” – developer tools and such. This has resulted in the creation of very complex software artifacts. Most real-world applications today are built on top of these existing complex software systems.
The problem is that almost all academic institutions and programs focus on the inventive aspects of programming. This means that they teach algorithms and techniques assuming that everything will be written from scratch. Real life is of course never like this – it is difficult if not impossible to be a computer software professional these days and work just with your own code. More often than not, most developers have to wade through other people’s code to understand, use or modify it. Developing software today involves a lot more than just writing it.
The skills to “read programs” are acquired the hard way – and sometimes never fully mastered. Kothari suggested that there needs to be an emphasis on program reading in training and that tools need to be built to aid in reading programs and forming the proper mental model of them. The barrier to future software productivity is not machines or algorithms but human mastery of the complexity of the vast amount of critical software out there.
Program reading is not easy, as most people in open source know! This is due to
There are some tools available to assist in program reading, such as CScope (BTW, Hank Janssen of our lab wrote parts of CScope), but not a lot of attention has been paid to WHAT program reading needs in order to address the complexity issues raised above. Kothari has a company, EnSoft, that provides some very cool tools to do the kinds of things that are needed for reading complex programs. The tools are based on abstractions used in program comprehension (there is an IEEE Conference on Program Comprehension held every year). Kothari illustrated one that he calls a “matching pair” (MP). Matching pairs are defined by a syntactic pattern – which could be artifacts (such as matching parentheses) or events (such as locking and unlocking a resource). There are many types of such matching pairs, and a matching pair can be defined with respect to control flow, data flow, or both. A control-flow matching pair means that a function f must be followed by a function f-inverse in EVERY execution path the program could take. Checking every execution path is hard – doing it via fully automated static analysis is provably intractable in general – especially in something like the Linux kernel.
In the tool Kothari demonstrated, a call graph is generated and a “query language” is defined over call graphs. Looking for matching pairs with the tool became unbelievably simple. This was just one of the things that can be done to reduce the complexity and the time it takes to figure out what a very complex program is doing.
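To make the matching-pair idea concrete, here is a minimal sketch (in Python, over a toy control-flow graph – not EnSoft’s tool, and all names are my own invention): enumerate every execution path from the entry point and check that an operation like lock is balanced by unlock on each one.

```python
def all_paths(cfg, node, path=None):
    """Enumerate every execution path through a toy control-flow graph,
    given as a dict mapping node -> list of successors (leaves end a path)."""
    path = (path or []) + [node]
    succs = cfg.get(node, [])
    if not succs:
        yield path
    else:
        for s in succs:
            yield from all_paths(cfg, s, path)

def matching_pair_holds(cfg, entry, f, f_inv):
    """True if on EVERY path, calls to f are balanced by calls to f_inv
    (a control-flow matching pair, e.g. lock/unlock)."""
    for path in all_paths(cfg, entry):
        balance = 0
        for op in path:
            if op == f:
                balance += 1
            elif op == f_inv:
                balance -= 1
        if balance != 0:
            return False
    return True

# Toy example: a branch where one path forgets to unlock.
cfg_bad = {
    "entry": ["lock"],
    "lock": ["work_a", "work_b"],
    "work_a": ["unlock"],
    "work_b": [],  # this path returns without unlocking
}
cfg_good = dict(cfg_bad, work_b=["unlock"])

print(matching_pair_holds(cfg_bad, "entry", "lock", "unlock"))   # False
print(matching_pair_holds(cfg_good, "entry", "lock", "unlock"))  # True
```

Of course, enumerating paths explicitly blows up on real code – which is exactly why tools that work over call-graph abstractions and queries, rather than raw path enumeration, are so useful.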
I think this is a real breakthrough – and I am now a confirmed advocate of program reading. I am hoping to work with Prof Kothari to do some more stuff with this – I hope to share the results if I do end up doing that.
Why do I mention this on this forum? Because program reading is something that open source developers and IT pros have been doing for a long time. Open source developers have a culture in which a lot of code reading is encouraged, and IT pros constantly have to update and upgrade the scripts they use to control and run their infrastructure. The cultural advantage lies with open source developers and IT pros, but given that the complexity of software is increasing exponentially, everyone could do with a little help.
by MichaelF on November 17, 2006 03:20pm
In the wake of the announcement that Sender ID will be included in the list of technologies covered by the OSP (Open Specification Promise), we spent some time talking with Eric Allman to get his thoughts.
Eric is a pioneer in internet communications and is the co-founder/CSO (Chief SCIENCE Officer) of Sendmail. In this interview Sam and Eric discuss the announcement, Sendmail's history, the recent 25th anniversary of email and more...
by billhilf on November 28, 2006 01:51am
I’m sitting in Istanbul watching ships – big ships with huge loads of containers on board. I recently finished a great book, The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger by Marc Levinson (2006), a fascinating analysis of the history of the shipping container. It may sound dry, but it’s a very interesting story – particularly that of the entrepreneur Malcom McLean, who challenged the norm and introduced standardized, packaged shipping, transforming commercial shipping from a costly, labor-intensive, and inefficient system into a booming industry that radically dropped the cost of transporting goods around the world.
If you are in the software or IT industry, it’s near impossible to read this book and not correlate many of the concepts of ‘containerization’ or standardization to the technology world. Almost all the containers you see today on ships, trains, or in loading yards are between 40 ft (12.2 m) and 45 ft (13.7 m) long; most are 40-footers with a capacity of 2 TEU (twenty-foot equivalent units). The massive change in both transportation and the global economy came from this simplicity of size – a small set of standard sizes that allowed ships, trucks, receiving bays, and all of the related logistical systems to adapt easily to an industry-wide standard.
Prior to containerization, there were major inefficiencies in commercial shipping. Packaging and crating were inconsistent – “breakbulk” cargo consisted of separate items that had to be handled individually, such as bags of sugar or flour packed next to steel piping. The types and shapes of the shipping vessels were not designed for efficiency: wide at the top and narrow at the bottom. All of this meant heavy reliance on manual labor to absorb the inefficiencies in the process. See clip 3 to watch Levinson describe why trade was so difficult prior to containerization.
Standardization of containers produced tremendous efficiencies and unlocked interoperability between the vehicles that transport the containers and the tools, machines, and facilities that send and receive them and their goods. How big a change did this standardization make?
Today, approximately 90% of non-bulk cargo worldwide moves by containers stacked on transport ships. Some 18 million total containers make over 200 million trips per year. For the past ten years, demand for cargo capacity has been growing almost 10% a year. There are ships that can carry over 11,000 TEU ("Emma Mærsk", 1,300 feet long), and designers are working on freighters capable of 14,000 TEU. At some point, container ships will be constrained in size only by the Straits of Malacca, one of the world's busiest shipping lanes. There are plans underway for a quarter-mile-long ship, called the Malacca-Max, targeting a capacity of 18,000 containers (for a sense of scale, it would take $1 billion in cargo down with it if it sank).
When I talk with customers, they repeatedly say they are looking for a way to simplify. They want to get more things done (and faster) with less complexity and cost in their environment – they aren’t looking to run science projects. Moreover, the last IT director I talked with uses this as an interviewing tool – testing candidates for their ‘appetite’ for standardization versus highly customized, highly variant systems. Granted, there is always a need for some customization and variation; it’s not a question of ‘if’ but of ‘how much’, ‘where’, and ‘why?’ These questions are critical, and each needs to have real business value described in the answer. Building a business on a standard platform helps facilitate this simplification and unlocks interoperability for IT. It is an important IT policy that I have employed in the past, and many IT leaders I know use it as a core part of their IT strategy.
Standardization of shipping containers created an entire industry – one that has literally grown up around this concept of a standard ‘platform’. Beyond the core shipping industry itself (logistics, supply chain, services, ship-to-ground transport, etc.), people are starting to ‘hack’ the shipping-container standardization model. Sun announced a data center in a shipping container – an interesting proposal, but limited IMHO (maybe they should GPL it). I find AquaSciences one of the most fascinating ideas: using containers for fully contained mobile freshwater generation systems. It’s easy to see the potential of this, and it literally could save lives. Of course, there are people building living spaces out of containers, such as Freitag’s ‘skyscrapers’ in Zurich. Here’s a list of all sorts of shipping-container-as-house ideas.
Standardization was the key to success. As I sit here in Istanbul, watching literally hundreds of container ships make their way through the Bosporus from the Black Sea to the Sea of Marmara (which is connected by the Dardanelles to the Aegean Sea, and thereby to the Mediterranean Sea), I can’t help but marvel at how such a simple concept – the standardization of the shipping container – made such a radical and important change to the global economy and our lives. I don’t think it’s coincidental that standardization made things cheaper and easier (two words Levinson uses in the closing sentence of ‘The Box’). Here’s to great ideas and to getting more work done with less complexity.
1. Video interviews with ‘The Box’ author Marc Levinson
2. Industry standards and platform standardization are important in software development as well as the datacenter – for an example, see this article on “Using IEEE Standards to Support America’s Army Gaming Development”
by Sam Ramji on November 01, 2006 04:15pm
I got to spend an hour with Ross Mayfield (founder of Socialtext) today. We talked about a range of things, from open source calculators (WikiCalc) to the future of Sharepoint as a platform for open source development and the shift in Microsoft's approach to open source. Ross graciously gave us an extra ten minutes to capture part of our conversation on video.
Socialtext was the first commercial wiki company, built initially on the Perl-based Kwiki, and Ross has been a leader in the democratization of this collaboration technology for many years. For more on social computing, you can check out Wikipedia. By now, unless you work with a group of card-carrying Luddites, you have probably even browsed a wiki within your company, run by your own IT department or by wiki enthusiasts. In that case you’ve already entered the current wave of “enterprise social computing” a.k.a. “Enterprise 2.0”. Our internal wiki at the Open Source Software Lab currently runs on Plone.
About five months ago, Ross open-sourced part of this platform. The Socialtext Open Source Wiki has the details on the project. It currently runs on PostgreSQL and Ubuntu… but since it’s based on Perl, I hope that the porting effort to SQL Server and Windows wouldn’t be too difficult.
Even more interesting to me, given my role at Microsoft, is that Socialtext has built a SharePoint integration ("Socialpoint"). This gives SharePoint users access to a best-of-breed wiki and blogging engine while retaining presence, Office integration, and a unified portal infrastructure. My inner geek got going when Ross described the new protocol handler they’ve built - "socialpoint:foo/bar" - for navigating within SharePoint across wikis. I think this is a good example of how Microsoft platform software should be combined with open source applications. We continue to invest in scaling the infrastructure, and to open it up to developers for innovative applications that can change as often as customers require.
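For the curious, the basic mechanics of a custom URI scheme like this are simple. Here’s a hedged sketch in Python – the real Socialpoint handler’s format isn’t documented in this post, so the workspace/page interpretation below is purely my guess for illustration:

```python
from urllib.parse import urlsplit

def parse_socialpoint(uri):
    """Split a hypothetical 'socialpoint:workspace/page' URI into its parts.
    urlsplit handles arbitrary schemes, so the custom scheme parses cleanly."""
    parts = urlsplit(uri)
    if parts.scheme != "socialpoint":
        raise ValueError(f"not a socialpoint URI: {uri!r}")
    workspace, _, page = parts.path.partition("/")
    return workspace, page

print(parse_socialpoint("socialpoint:foo/bar"))  # ('foo', 'bar')
```

A registered protocol handler would then map those parts onto the right wiki and page inside the portal – the appeal is that any application that can launch a URI can deep-link into the wiki.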
Ross is thinking about releasing these modules under open source licenses. I’m hopeful that the MS-PL (BSD-like) or MS-CL (Apache-like) will prove worthwhile to him, and that Codeplex will turn out to be the right repository for the work - but regardless of the license or the community site, I'm excited to see that the Microsoft platform and open source software companies can propel great technology into broad adoption - carrying innovation into remote corners of the enterprise.
For more details on the conversation - and to catch Ross' insights first-hand - check out the video interview.
by MichaelF on November 14, 2006 10:13am
Today at IT Forum in Barcelona, Microsoft announced the release of PowerShell, a command-line shell and scripting language intended to simplify system administration and automation.
A couple of weeks ago, Jeffrey Snover, PowerShell Architect, sat down and talked with Sam about PowerShell, its evolution, and what users can expect from this powerful solution.
PowerShell can be downloaded at: http://www.microsoft.com/powershell/download
Updated: See a demo on Channel 9
by Bryan Kirschner on November 07, 2006 06:51pm
I know I’m running a risk of losing focus on the thread I started on analogy and metaphor, but there’ve been too many things popping up in the last couple weeks. In the interest of focus (and maybe good taste) I decided not to follow up There’s a Vendor in My OSS with “And Now There’s an Oracle in My Pudding” for now. (Yes, that is the title that popped into my head as soon as I read the news. No, I won’t promise I won’t go there in a future blog.) But I have got to introduce Tammara Combs Turner and the cool stuff she works on, because (a) she just won a national award and (b) she once worked for me. (I claim no contribution whatsoever to the award: I’m simply relieved it is prima facie evidence I didn’t slow her down: in addition to her work and her award, she’s a PhD candidate, an active mentor, and raising two young kids…while I can’t even keep my office clean.)
Tammara is a Program Manager (which doesn’t tell you anything—read Vlad’s entertaining perspective on this, which I share) in the Microsoft Research Community Technologies Group. These folks study “computer mediated collective action”—which, as you might imagine, means we talk a lot about things like peer-to-peer technical support and distributed software development….
One of the technologies they offer – free, online, for anybody to use – is NetScan, an analysis and visualization technology for Usenet newsgroups. Below is Usenet. All of it. Groups are shown as blocks of various sizes, while colors indicate growth. It’s a great tool for social network analysis, not to mention idle geek-out amusement (hm, what’s the posting trendline on alt.politics.bush these days?). She wrote a paper all about it here (Picturing Usenet: Mapping Computer-Mediated Collective Action).
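The size-and-growth view NetScan draws boils down to a couple of simple per-group metrics. A toy sketch in Python (the groups and counts below are made-up numbers, not NetScan data, and this isn’t NetScan’s actual computation):

```python
# Hypothetical monthly post counts per newsgroup.
counts = {
    "alt.politics.bush":  {"2006-10": 5200, "2006-11": 4100},
    "comp.os.linux.misc": {"2006-10": 900,  "2006-11": 1170},
}

def growth(series, prev, cur):
    """Fractional change in posting volume between two periods -
    the kind of number a block's color could encode."""
    return (series[cur] - series[prev]) / series[prev]

for group, series in counts.items():
    g = growth(series, "2006-10", "2006-11")
    # Block area ~ current volume; color ~ sign/magnitude of growth.
    print(f"{group:20s} size={series['2006-11']:5d} growth={g:+.0%}")
```

Scale that up to every group on Usenet, map size to block area and growth to color, and you get the kind of at-a-glance map of collective activity the tool provides.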
Tammara was willing to take some time out of her busy schedule and come to my mess of an office to do an interview with us…
by MichaelF on November 13, 2006 10:39am
While we were in San Jose for Zendcon, we had the opportunity to spend some time with Jacob Taylor, CTO of SugarCRM. As you may remember, Microsoft and Sugar announced a technical collaboration in February focused on improving the experience of deploying and running Sugar on Windows. Sugar also decided to release a new Sugar Suite distribution under the Ms-CL (Microsoft Community License) as part of the Shared Source Program.
In this interview, Sam and Jacob discuss the path that led Jacob to being co-founder and CTO of SugarCRM, as well as what has been happening at Sugar since February. We also get Jacob's thoughts on the commercial open-source model that Sugar pioneered, and on the collaboration announced by Microsoft and Zend, which has an impact on Sugar and its customers.
by MichaelF on November 30, 2006 01:31pm
Brad Abrams, Group Program Manager for the .NET Framework, sits down with Sam to discuss all things AJAX, including: the OpenAjax Alliance, Atlas, the Microsoft AJAX Library, and cross-browser compatibility. Brad also does a quick demo near the end of the video.
ASP.NET AJAX homepage: http://ajax.asp.net
Shanku Nyogi's (Product Unit Manager for the UI Framework and Services Team) blog: http://www.shankun.com/Atlas_Php_2.aspx
Brad's blog: http://blogs.msdn.com/brada