by MichaelF on December 05, 2006 06:36pm
Today Microsoft announced the availability of Windows Unified Data Storage Server 2003 (WUDSS). Built for mixed environments, this solution is highly interoperable without loss of performance in both NFS and CIFS environments. Remote administration is also provided through both ActiveX and Java RDP client support.
This sounded interesting so we sent Hank to talk with Tres Hill, Sr Product Manager, to find out why this announcement is significant for IT professionals running heterogeneous environments.
by anandeep on December 05, 2006 03:33pm
There are two things people figure out about me (mainly because I tell them!): one, that I am crazy about airplanes, and two, that I love stirring controversy! In this blog I get an opportunity to bring those two favorite things together.
There are two kinds of light or General Aviation airplanes out there - the "production/certified" airplanes (referred to as "Spam Cans") and "homebuilt/experimental" airplanes.
You probably have heard of the manufacturers of the "Spam Cans" - they have names like Cessna, Piper and Beechcraft. These are large companies with lots of engineers who mass-produce airplanes and sell them to you if you part with large sums of money. They also give them to you in any color as long as it's creamish. These are faithful, reliable, if boring, airplanes. Nothing wrong with them, but they are not fun. They also make a lot of compromises in speed, maneuverability, weight-carrying ability and runway length requirements - and usually don't excel in any of those criteria. They are the airplanes every commercial enterprise uses, though. Almost everybody learns to fly in them. Some of them are bush planes in Alaska and Africa and are the lifeline of a lot of people there - nothing to sneeze at! Below is a picture of a Cessna 150 (my ex-airplane) that I used to build up hours and tour the Pacific Northwest. One of the "Spam Cans" but beloved nevertheless.
Then there are people who accept no compromises. They decided they didn't want to accept a hired engineer's opinion of the best design. They went to work designing their own planes and then offering plans or kits so that other people could build them. One of the early pioneers of this was Burt Rutan, now famous as the designer of the first private spaceplane, "SpaceShipOne", who offered a kit for an airplane christened the "VariViggen" that had its tailplanes in front (in a configuration called a "canard"). It could go faster than any production plane on much less power and was stall-proof, which meant that it was a lot safer than the regular planes. The other success story is Richard VanGrunsven, whose company Van's Aircraft has built a family of aircraft called "RV"s (there is still some debate as to whether that means "Recreational Vehicle" or "Richard VanGrunsven"). As of the time of writing there were 4,861 RVs built and flying - more RVs ship every year than any commercial light plane manufacturer in the world can produce! These aircraft are speedier, more maneuverable, carry more weight, or require less runway than comparable production aircraft with the same horsepower. These planes are known as "homebuilts", "amateur-built" or "experimental". The "experimental" title comes from the placard that they are required by law to display - this is also the placard all manufactured planes and military aircraft have to display until they get certified by the FAA. It doesn't necessarily mean that the aircraft is an experiment in progress.
OK - where's the controversy? I am saying that the Open Source Software movement is like the experimental aircraft movement, and asserting that commercial software companies are like production aircraft companies.
After all, there is a community among experimental builders that rivals the OSS community. They share ideas freely, give each other plans for improvements, and are very loyal and committed to the cause. One instance of such a community is "Van's Air Force". These communities and the experimental manufacturers are also on the cutting edge of technology, pioneering cheap "all glass" computer-screen instrumentation in light planes, among other things. Like Linux, most successful experimental aircraft have a solid "kernel" that is built and maintained one way, but like Linux the "distributions" abound based on builders' personal preferences. For instance, from the very successful Van's RV-4 came the Harmon Rocket. Ubuntu and RedHat aren't THAT different! :-)
But it's not all applehood and mother pie. Building these airplanes (even if you do all the work yourself) is not much cheaper than buying a general aviation airplane. You do have to build them to get all the advantages, and it is considered a truism in the community that you should "build if you like to build, buy if you like to fly!" Which means that it takes serious commitment to build one of these things, and you had better take a lot of pleasure in just the act of building. Of course, you could buy one of these already built, but would you trust the builder? Build quality is very variable! Certification standards are conservative and lengthy for a reason - a small variation can result in a catastrophic outcome. These aircraft are also more demanding to fly than the boring old "Spam Can".
I fly a Cessna 182 for the Civil Air Patrol - and I wouldn't want to fly an experimental airplane that I myself hadn't built. (Even if I built one - would I?) We fly in the mountains with heavy loads (survival gear, direction-finding equipment, and individuals who are - shall we say - "weight challenged"). I know the 182 won't do anything spectacularly, but it will do its standard thing as long as I follow the manual. It's heavy on the controls, isn't that fast and has a high fuel consumption - but it can carry a heavy load and land in a reasonable distance. And I can be sure that all the improvements that Cessna has mandated have been incorporated, since it would be illegal not to. Not so for the experimentals, since the builder, not the kit manufacturer, is the legal manufacturer and can make his own decisions! It isn't the "experimental" placard that scares me; it's the fact that I would have to form a judgment on my own about every INSTANCE of what is fundamentally the same design.
Am I taking the analogy too far? To be truthful, I don't know - but it is certainly worth thinking about!
Now if you send e-mail to Sam Ramji telling him how much you liked this blog - I might be able to afford a house in the Puget Sound Area and this RV-8 kit that I want at the same time! :-)
by kishi on December 01, 2006 04:21pm
I started the first HPC blog (see "previous blog") with an understanding that HPC is an area where there has been a surge of activity from a development/investment standpoint. This segment of information technology has experienced a heightened level of engagement from OEMs and partners, all trying to meet the growing computing needs of their customers. So after getting a basic understanding of why HPC matters, the next logical step was to uncover "how to think" about HPC infrastructure and tap into the "wisdom" behind managing it. You might ask why this is relevant. For starters, setting up HPC infrastructure is an experience that, just like any other infrastructure, be it network or storage, requires intricate planning and intimate familiarity with its individual contributing components. In the case of HPC, let's just say you really need to know your nodes :-). Let's talk more about what's involved in setting up an HPC infrastructure and how to think about it as a whole:
1. Investment Impetus: To successfully plan and design an HPC infrastructure, the first and foremost step should be to "look beneath the surface". This simply means understanding the primary reason for investing in HPC. The demand for HPC equipment should be linked to a set of business objectives, with a clear purpose around the outcome and expectations. This is more true today than at any other moment in time, because the consumption of HPC cycles, specifically in research and development across all verticals, has seen a steady 70% growth over the past four years (source: Primeur). Despite this tremendous growth in the proliferation of HPC technology, the growth pattern itself is sporadic. One of the reasons for this may be the complexity, not only in terms of design but also in terms of consumption. Take the case of SHARCNET in Southern Ontario, which developed a long-range plan around adoption and implementation of HPC technology. According to the report, some of the elementary challenges around planning for HPC emerge from the fact that "it is an enabling technology for an extremely diverse set of researchers". This embodies the essence of the complexity and diversity predominant in the HPC space.
2. Planning and Designing Hardware: While thinking about planning and designing an HPC infrastructure implementation, I spoke to several folks in this area, drew on a decade and a half of my experience as an Infrastructure Architect, and identified some key areas that I would consider. These include:
a. Facility considerations (Rackspace, Power and Cooling): Ask any enterprise-level datacenter manager what his/her top 10 pain points are and you are bound to hear the words "rackspace, power and cooling" in what follows. Dig deeper and you'll realize that in any datacenter there's a fixed number of colos (colocation spaces) you can populate based on the HVAC design. This means that rackspace is at a premium in each of these colos, with every "U" accounted for. Packing dense chipsets into small form-factor servers adds to the existing power and cooling challenges.
Translation – you need more outlets and more airflow per rack than you did a decade ago, when a handful of 4U and 5U servers took up the entire rack.
b. Physical Plant planning: To quote resident HPC guru Frank Chism: "I cannot overemphasize the importance of planning for the physical plant in HPC deployments. Things like room and raceways for well managed and planned cabling. HPC uses more cable than anything except maybe SAN. Also, pay attention to floor loads, air flow, clean and redundant power. Finally, never, never forget out-of-band management. Deep subfloor really helps with all that cabling."
Translation – effective HPC performance calls for an effective HPC design, which includes tweaking hard as well as soft components. These components can be as covert as chip design or as overt as subfloor depth.
c. Hardware and Processing Power: Pushing the envelope on hardware and processor architectures today translates to increased performance (the heart and soul of HPC). Adding energy-efficient hardware on top of the architecture amounts to greater investment in raw computing power, which in turn translates to a sound HPC infrastructure. The key advantages to look for here are faster data access and increased instruction throughput. The word "performance" is repeated throughout this topic because it IS what HPC is all about: the ability to reduce the number of cycles needed to process data. Addressing the hardware and processing specs as part of the core requirements ensures a smoother build-out.
3. Implementing HPC Tools and Software: Like any other piece of hardware, an HPC cluster is just that until software and tools exploit the underlying architecture to drive results and performance - to do what it does best: compute. When thinking of some core elements of HPC tools and software, here's how I break them up:
a. Setup and deployment systems: Setting up HPC clusters goes back to what I said earlier in Section 1 - what do you want to do with it? Although there are various ways and methods that let you drive the software and installation experience of an HPC system, the bottom line is that this depends to a great extent on what components make up the genetic composition of the HPC cluster you ordered. Looking at some of the HPC setup and deployment tools out there, a few mainstream ones are Scali and HP-MPI (HP's message passing interface). These packages provide deployment, monitoring and job scheduling services for managing and administering an HPC cluster, much like IBM's CSM (Cluster Systems Manager). In the open source space there are Maui and Torque, which work as job schedulers and resource managers for compute nodes and clusters. Platform Rocks is another suite of utilities that allows installation and integration of third-party apps. (For a feel of the kind of parallel job these schedulers dispatch, see the MPI sketch after this list.)
b. Parallel FS: This is truly where I think the frontier for some intense activity will be over the next few years. Using Wikipedia's description: "Distributed parallel file systems stripe data over multiple servers for high performance. Some of the distributed parallel file systems use object storage devices (OSD) (in Lustre called OST) for chunks of data, together with centralized metadata servers. Examples include Ceph, a scalable distributed file system from the University of California, Santa Cruz (fault tolerance is on their roadmap); Lustre from Cluster File Systems (Lustre has failover, but multi-server RAID1 or RAID5 is still on their roadmap for future versions); and the Parallel Virtual File System (PVFS, PVFS2)."
Deep-Dive: At base, parallel file systems are global namespaces for files that achieve high bandwidth via parallelism. That bandwidth comes in three dimensions: high aggregate bandwidth, high single-stream bandwidth, and high metadata operations per second. No one seems to have achieved high performance in all of these dimensions. Don't forget that the volumes of data are so large that backup is a major undertaking, so reliability is required as well. Further, nobody seems to be able to make a parallel file system that performs well for high-speed, short I/Os, like the ones you generate when compiling a major application. (See the striping sketch after this list for the basic arithmetic behind the parallelism.)
c. Multiple Networks: A final comment on HPC implementation: HPC deployments often have multiple networks. For example, it does little good to have a parallel file system that delivers gigabytes per second of data to single nodes if the network can't handle that much bandwidth!
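As promised in item (a) above, here is a minimal MPI "hello world" in C - a sketch of the kind of parallel job that schedulers like Maui and Torque ultimately dispatch across a cluster's compute nodes. It is illustrative only and not taken from any of the packages named above:

```c
/* hello_mpi.c - compile with: mpicc hello_mpi.c -o hello_mpi
   run with, e.g.:             mpirun -np 8 ./hello_mpi       */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                 /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes in total? */
    MPI_Get_processor_name(host, &len);     /* which node am I running on?  */

    printf("process %d of %d on node %s\n", rank, size, host);

    MPI_Finalize();                         /* shut the runtime down        */
    return 0;
}
```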
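And for item (b), a small C sketch - my own illustration, with made-up parameter names - of the round-robin arithmetic a parallel file system client might use to map a file offset onto a data server and a chunk on that server:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical round-robin striping: the file is cut into fixed-size
   stripe units dealt out to the data servers in order, like cards. */
#define STRIPE_SIZE (64 * 1024)   /* 64 KB stripe unit (illustrative) */
#define NUM_SERVERS 4             /* number of data servers/OSTs      */

static void locate(uint64_t offset, int *server, uint64_t *chunk)
{
    uint64_t unit = offset / STRIPE_SIZE;  /* which stripe unit overall */
    *server = (int)(unit % NUM_SERVERS);   /* which server holds it     */
    *chunk  = unit / NUM_SERVERS;          /* which unit on that server */
}

int main(void)
{
    uint64_t offsets[] = { 0, 65536, 131072, 262144, 1048576 };
    for (int i = 0; i < 5; i++) {
        int server;
        uint64_t chunk;
        locate(offsets[i], &server, &chunk);
        printf("offset %8llu -> server %d, chunk %llu\n",
               (unsigned long long)offsets[i], server,
               (unsigned long long)chunk);
    }
    return 0;
}
```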
So in conclusion, here's a recap of the learning behind setting up HPC infrastructure: understand the investment impetus first, then plan and design the hardware (facilities, physical plant, and processing power), and finally implement the HPC tools and software (setup and deployment systems, parallel file systems, and networks).
I look forward to getting back to you with more on HPC over the next few weeks. Until then, "Happy Computing"!
by MichaelF on November 30, 2006 01:31pm
Brad Abrams, Group Program Manager for the .NET Framework, sits down with Sam to discuss all things AJAX, including the Open AJAX Alliance, Atlas, the Microsoft AJAX Library and cross-browser compatibility. Brad also does a quick demo near the end of the video.
ASP.NET AJAX homepage: http://ajax.asp.net
Shanku Nyogi's (Product Unit Manager for the UI Framework and Services Team) blog: http://www.shankun.com/Atlas_Php_2.aspx
Brad's blog: http://blogs.msdn.com/brada
by billhilf on November 28, 2006 01:51am
I'm sitting in Istanbul watching ships - big ships with huge loads of containers on board. I recently finished a great book, The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger by Marc Levinson (2006), a fascinating analysis of the history of the shipping container. It may sound dry, but it's a very interesting story - particularly that of the entrepreneur Malcom McLean, who challenged the norm and introduced standardized, packaged shipping, transforming commercial shipping from a costly, labor-intensive and inefficient system into a booming industry that radically dropped the cost of transporting goods around the world.
If you are in the software or IT industry, it's near impossible to read this book and not correlate many of the concepts of 'containerization' or standardization to the technology world. Almost all containers today that you see on ships, trains or in loading yards are between 40 ft (12.2 m) and 45 ft (13.7 m) long. Typically, most are 40 ft, with a capacity of 2 TEU (twenty-foot equivalent units). The reason for the massive change in both transportation and the global economy is this simplicity of size: a small set of standard sizes that allowed ships, trucks, receiving bays, and all of the related logistical systems to adapt easily to an industry-wide standard.
Prior to containerization, there were major inefficiencies in commercial shipping. Packaging and crating were inconsistent - "break-bulk" cargo consisted of separate items that had to be handled individually, such as bags of sugar or flour packed next to steel piping. The types and shapes of the shipping vessels were not designed for efficiency: wide at the top and narrow at the bottom. All of this resulted in heavy use of manual labor to absorb the inefficiencies in the process. See clip 3 to watch Levinson describe why trade was so difficult prior to containerization.
Standardization of containers resulted in tremendous efficiencies and unlocked interoperability between the vehicles that transported the containers and the tools, machines and facilities that send and receive those containers and their goods. How big of a change did this standardization make?
Today, approximately 90% of non-bulk cargo worldwide moves by containers stacked on transport ships. Some 18 million total containers make over 200 million trips per year. For the past ten years, demand for cargo capacity has been growing almost 10% a year. There are ships that can carry over 11,000 TEU ("Emma Mærsk", 1,300 feet long), and designers are working on freighters capable of 14,000 TEU. At some point, container ships will be constrained in size only by the Straits of Malacca, one of the world's busiest shipping lanes. There are plans underway for a quarter-mile-long ship, called the Malacca-Max, targeting a capacity of 18,000 containers (for scope, it would take $1 billion in cargo down with it if it sank).
When I talk with customers, they are repeatedly looking for a way to simplify. They are looking to get more things done (and faster) with less complexity and cost in their environment - science projects are not what they are looking to experiment with. Moreover, the last IT director I talked with is using this as an interviewing tool - testing interested candidates for their 'appetite' for standardization versus highly customized, highly variant systems. Granted, there is always a need for some type of customization and variation; it's not a question of 'if' but of 'how much', 'where', and 'why'. These questions are critical, and each needs to have real business value described in the answer. Building a business on a standard platform helps facilitate this simplification and unlocks interoperability for IT. It is an important IT policy that I have employed in the past, and many IT leaders I know use it as a core part of their IT strategy.
Standardization in shipping containers created an entire industry - one literally grown around this concept of a standard 'platform'. Beyond the core shipping industry itself (logistics, supply chain, services, ship-to-ground transport, etc.), people are starting to 'hack' the shipping container standardization model. Sun announced a data center in a shipping container - an interesting proposal, but limited IMHO (maybe they should GPL it :-)). I find AquaSciences (picture on the left) one of the most fascinating ideas: using containers for fully-contained mobile freshwater generation systems. It's quite easy to see the potential of this, and it literally could save lives. Of course, there are people building living spaces out of containers, such as Freitag's 'skyscrapers' in Zurich. Here's a list of all sorts of shipping-container-as-house ideas.
Standardization was the key to success. As I sit here in Istanbul, watching literally hundreds of container ships make their way through the Bosporus from the Black Sea to the Sea of Marmara (which is connected by the Dardanelles to the Aegean Sea, and thereby to the Mediterranean Sea), I can't help but marvel at how such a simple concept - the standardization of the shipping container - made such a radical and important change to the global economy and our lives. I don't think it's coincidental that standardization made things cheaper and easier (two words Levinson uses in the closing sentence of 'The Box'). Here's to great ideas and to getting more work done with less complexity.
1. Video interviews with 'The Box' author Marc Levinson
2. Industry standards and platform standardization are important in software development as well as the datacenter - for an example, see this article on "Using IEEE Standards to Support America's Army Gaming Development"
by MichaelF on November 17, 2006 03:20pm
In the wake of the announcement that Sender ID will be included in the list of technologies covered by the OSP (Open Specification Promise), we spent some time talking with Eric Allman to get his thoughts.
Eric is a pioneer in internet communications and is the co-founder/CSO (Chief SCIENCE Officer) of Sendmail. In this interview Sam and Eric discuss the announcement, Sendmail's history, the recent 25th anniversary of email and more...
by anandeep on November 16, 2006 07:39pm
One of the great things about my job at the Open Source Software Lab (OSSL) here at Microsoft (besides being able to work with both Linux and Windows!) is that I get to go to computer science research conferences. I try not to attend the purely academic ones, preferring those in which both industry and academic research issues are addressed.
I just got back from ISSRE (pronounced "is-ree"), the 17th IEEE International Symposium on Software Reliability Engineering, 2006. This conference covers everything that impacts the reliability of computers, from "drivers of reliability" to "testing to ensure reliability" to "static analysis of programs".
Skeptical that anything they talk about here would be useful to y’all? Well, think again! They have all kinds of practical advice on doing things right. The talks I really enjoyed included
Only one of the above talks was from an academic institution; the other two were based on experience with software widely used in the consumer and application server spaces.
The thing I enjoyed most was a tutorial on "Software Productivity and Reliability – Tools and Techniques" given by Prof. S. C. Kothari of Iowa State University. The tutorial title is appropriate, but I think it should have been "Learn to Read Programs Properly!"
Kothari believes that a lot of attention has been paid to what he calls "program writing" – developer tools and such. This has resulted in the creation of very complex software artifacts. Most real-world applications today are built on these already complex software systems.
The problem is that almost all academic institutions and programs focus on the inventive aspects of programming. This means that they teach algorithms and techniques assuming that everything will be written from scratch. Real life is, of course, never like this – it is difficult, if not impossible, to be a computer software professional these days and work only with your own code. More often than not, developers have to wade through other people's code to understand, use or modify it. Developing software today involves a lot more than just writing it.
The skills to “read programs” are acquired the hard way – and sometimes never fully mastered. Kothari suggested that there needs to be an emphasis on program reading in training and that tools need to be built to aid in reading programs and forming the proper mental model of them. The barrier to future software productivity is not machines or algorithms but human mastery of the complexity of the vast amount of critical software out there.
Program reading is not easy, as most people in open source know! This is due to
There are some tools available to assist in program reading, such as CScope (BTW, Hank Janssen of our lab wrote parts of CScope), but not much attention has been paid to WHAT program reading needs in order to address the complexity issues raised above. Kothari has a company, Ensoft, that provides some very cool tools to do the kinds of things needed for reading complex programs. The tools are based on abstractions used in program comprehension (there is an IEEE Conference on Program Comprehension held every year). Kothari illustrated one that he called a "matching pair" (MP). Matching pairs are defined by a syntactic pattern – which could be artifacts (such as matching parentheses) or events (such as locking and unlocking a resource). There are many types of such matching pairs, and for a program to be correct a matching pair can be defined with respect to control flow, data flow or both. A control-flow matching pair means that a function f would need to be followed by a function f-inverse in EVERY execution path that the program could take. Looking through every execution path is hard (and it is proven that doing so via automated static analysis of programs is in general an intractable problem) – especially in something like the Linux kernel.
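Here is a minimal C sketch of a control-flow matching pair, where f is pthread_mutex_lock and f-inverse is pthread_mutex_unlock. The function names and the bug are hypothetical, purely for illustration; the broken variant has one execution path on which the pair does not match, which is exactly the kind of defect a matching-pair query over all paths is meant to flag:

```c
#include <pthread.h>
#include <errno.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int balance = 100;

/* BUGGY: the early return leaves the mutex locked, so lock/unlock
   do NOT form a matching pair on every execution path. */
int withdraw_buggy(int amount)
{
    pthread_mutex_lock(&lock);
    if (amount > balance)
        return -EINVAL;            /* <-- path with no matching unlock */
    balance -= amount;
    pthread_mutex_unlock(&lock);
    return 0;
}

/* FIXED: every path through the function restores the pairing. */
int withdraw(int amount)
{
    int ret = 0;
    pthread_mutex_lock(&lock);
    if (amount > balance)
        ret = -EINVAL;
    else
        balance -= amount;
    pthread_mutex_unlock(&lock);
    return ret;
}
```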
Using the tool that Kothari demonstrated, a call graph was generated and a "query language" was defined over call graphs. Looking for matching pairs with the tool became unbelievably simple. This was just one of the things that can be done to reduce the complexity and the time it takes to figure out what a very complex program is doing.
I think this is a real breakthrough – and I am now a confirmed advocate of program reading. I am hoping to work with Prof Kothari to do some more stuff with this – I hope to share the results if I do end up doing that.
Why do I mention this on this forum? Because this is something that open source developers and IT pros have been doing for a long time. Open source developers have a culture in which a lot of code reading is encouraged, and IT pros have to constantly update and upgrade the scripts that they use to control and run their infrastructure. The cultural advantage lies with open source developers and IT pros, but given that the complexity of software is increasing exponentially, everyone could do with a little help.
by MichaelF on November 14, 2006 10:13am
Today at IT Forum in Barcelona, Microsoft announced the release of PowerShell, a command-line shell and scripting language intended to simplify administration and automation for system administrators.
A couple of weeks ago, Jeffrey Snover, PowerShell Architect, sat down and talked with Sam about PowerShell, its evolution, and what users can expect from this powerful solution.
PowerShell can be downloaded at: http://www.microsoft.com/powershell/download
Updated: See a demo on Channel 9
by MichaelF on November 13, 2006 10:39am
While we were in San Jose for Zendcon, we had the opportunity to spend some time with Jacob Taylor, CTO of SugarCRM. As you may remember, Microsoft and Sugar announced a technical collaboration in February focused on improving the experience of deploying and running Sugar on Windows. Sugar also decided to release a new Sugar Suite distribution under the Ms-CL (Microsoft Community License) as part of the Shared Source Program.
In this interview, Sam and Jacob discuss the path that led Jacob to become co-founder and CTO of SugarCRM, as well as what has been happening at Sugar since February. We also get Jacob's thoughts on the commercial open source model that Sugar pioneered, and on the collaboration announced by Microsoft and Zend, which has an impact on Sugar and its customers.
by Bryan Kirschner on November 07, 2006 06:51pm
I know I'm running a risk of losing focus on the thread I started on analogy and metaphor, but there've been too many things popping up in the last couple of weeks. In the interest of focus (and maybe good taste) I decided not to follow up There's a Vendor in My OSS with "And Now There's an Oracle in My Pudding" for now. (Yes, that is the title that popped into my head as soon as I read the news. No, I won't promise I won't go there in a future blog.) But I have got to introduce Tammara Combs Turner and the cool stuff she works on, because (a) she just won a national award and (b) she once worked for me. (I claim no contribution whatsoever to the award: I'm simply relieved it is prima facie evidence I didn't slow her down. In addition to her work and her award, she's a PhD candidate, an active mentor, and raising two young kids… while I can't even keep my office clean.)
Tammara is a Program Manager (which doesn’t tell you anything—read Vlad’s entertaining perspective on this, which I share) in the Microsoft Research Community Technologies Group. These folks study “computer mediated collective action”—which, as you might imagine, means we talk a lot about things like peer-to-peer technical support and distributed software development….
One of the technologies they offer – free, online, for anybody to use – is NetScan, an analysis and visualization technology for Usenet newsgroups. Below is Usenet. All of it. Groups are shown as blocks of various sizes, while colors indicate growth. It's a great tool for social network analysis, not to mention idle geek-out amusement (hm, what's the posting trendline on alt.politics.bush these days?). She wrote a paper all about it here (Picturing Usenet: Mapping Computer-Mediated Collective Action).
Tammara was willing to take some time out of her busy schedule and come to my mess of an office to do an interview with us…
by Sam Ramji on November 01, 2006 04:15pm
I got to spend an hour with Ross Mayfield (founder of Socialtext) today. We talked about a range of things, from open source calculators (WikiCalc) to the future of Sharepoint as a platform for open source development and the shift in Microsoft's approach to open source. Ross graciously gave us an extra ten minutes to capture part of our conversation on video.
Socialtext was the first commercial wiki company, built initially on the Perl-based Kwiki, and Ross has been a leader in the democratization of this collaboration technology for many years. For more on social computing, you can check out Wikipedia. By now, unless you work with a group of card-carrying Luddites, you have probably even browsed a wiki within your company, run by your own IT department or by wiki enthusiasts. In that case you've already entered the current wave of "enterprise social computing", a.k.a. "Enterprise 2.0". Our internal wiki at the Open Source Software Lab currently runs on Plone.
About five months ago, he open-sourced part of this platform. The Socialtext Open Source Wiki has the details on this project. It currently runs on PostgreSQL and Ubuntu… but since it's based on Perl, I hope that the porting effort to SQL Server and Windows wouldn't be too great.
Even more interesting to me given my role at Microsoft is that Socialtext has built a Sharepoint integration ("Socialpoint"). This gives Sharepoint users access to a best-of-breed wiki and blogging engine while retaining presence, Office integration, and a unified portal infrastructure. My inner geek got going when Ross described the new protocol handler they’ve built - "socialpoint:foo/bar" - for navigating within Sharepoint across wikis. I think this is a good example of how Microsoft platform software should be combined with open source applications. We continue to invest in scaling the infrastructure, and open it up to developers for innovative applications that can change as often as customers require.
Ross is thinking about releasing these modules under open source licenses. I’m hopeful that the MS-PL (BSD-like) or MS-CL (Apache-like) will prove worthwhile to him, and that Codeplex will turn out to be the right repository for the work - but regardless of the license or the community site, I'm excited to see that the Microsoft platform and open source software companies can propel great technology into broad adoption - carrying innovation into remote corners of the enterprise.
For more details on the conversation - and to catch Ross' insights first-hand - check out the video interview.
by MichaelF on October 31, 2006 08:31am
Today at Zendcon, Bill Hilf and Andi Gutmans announced a new technical collaboration aimed at improving the performance of PHP on Windows, both for IIS 6 today and, in the future, IIS 7 on Longhorn. The partnership has multiple components: Microsoft is releasing a FastCGI add-on for IIS, and Zend will establish a Windows testing lab to ensure high performance on an ongoing basis. In the end, the winners are the millions of PHP developers and hosters who want a viable choice in operating systems.
In this interview Sam and Andi discuss how Zend came to be as well as the details of the announcement. Andi provides his perspective on what this collaboration will mean to the PHP community and has a question of his own for Sam.
by billhilf on October 31, 2006 08:30am
Today I had the opportunity to attend and present at the Zend/PHP Conference and Expo in San Jose. I was here to announce a technical collaboration between Zend and Microsoft that will improve the performance of PHP on Windows Server (and down the road Longhorn Server). You can read the specific details of the collaboration here.
As part of my presentation, we performed a demo showing a before-and-after scenario. We first showed PHP running on Windows Server before the enhancements, then again after. On the latter, I'm pleased to say that we consistently achieved 100% performance gains, and on some applications a 150% improvement. All treats, no tricks… that's right, 100-150%.
So what are we doing exactly?
PHP, being the third most popular development language today, is an important addition to the options available to developers who want to leverage the Windows Server platform and the overall Microsoft ecosystem. According to Zend's internal statistics, the majority of PHP developers – over 70% – already use Windows as their application development platform. Improving their experience with running PHP applications on Windows in production is a natural next step.
In the end, we believe the real winners in this technical collaboration are PHP developers, who now have viable options when thinking about the platform of choice for their PHP applications. Of course, that means our mutual customers benefit from PHP applications as well as from the choice of technologies that best suit their needs. To make sure this happens not only today but also going forward, both Microsoft and Zend will be active participants in the PHP community, ensuring open communications and a continually improving experience for PHP developers in Microsoft environments.
As I have said before, interoperability does not happen by accident. This announcement is the result of a lot of hard work by people from both Microsoft and Zend. I personally want to thank Andi Gutmans, CTO and co-founder of Zend Technologies. Andi and I have been in discussions for a long while now, and I'm very happy to see the great results from a conversation started long ago. We have been working with Zend over the past six months to put all of the pieces in place, from the technical work on FastCGI to helping Zend create an engineering lab to improve PHP as it develops over time. This is the first step in an ongoing relationship.
And for those of you in North America, Happy Halloween!
by anandeep on October 27, 2006 12:27pm
I loved doing development in a research and university environment. You got to write cool code, prove new ideas, break new ground and generally ended up with bragging rights to say “I did an image recognition algorithm on a multi-layer architecture implementing reactive and planning parallelism on an autonomous robot!” The code had to work on your workstation or maybe on a demo machine once. Once you wrote the code, the only people who touched the system were hapless graduate students implementing the next big idea. They had to come to you and you could then dazzle them with your insight! This was “sexy development”!
When I moved to industry and wrote software for day-to-day use, things changed. Now you had all those people with "manager" titles telling you what to do, and those people called "testers" who told you why your code sucked (you couldn't logically argue your way out of that because the weasels usually had proof!). Of course, being consummate professionals, you adapted. You got the religion of "bullet-proof code" and worked on making sure the testers only had "fit and finish" bugs to file against you. Which the intern could work on. That was still fun - a different challenge, maybe not as "pure" as designing a neat new algorithm, but pretty good nevertheless!
You got past the testers, but when they integrated the components that you had bullet-proofed to run end-to-end or user acceptance tests, unexpected stuff happened. Who would have thought that they would configure the machine that way, or that another non-surface component could pass you null strings? Now you had to plan not only for the testers but also for other developers and those pesky sysadmin guys. How did they become sysadmins? They couldn't tell a polynomial solution from a log n solution anyway! But being nothing if not adaptable, you adapted. You now built bullet-proof AND idiot-proof code. (My father, a military pilot and flight instructor, used to say when teaching flight safety: "Nothing is foolproof because fools are so ingenious!") It got a little boring at times, but you still had the satisfaction of building something that was "engineered".
I thought I had shipped the product, but I found I couldn't sit back and relax. The support guys were making insinuations against my code. It didn't work, they said - and you hadn't put the right level of granularity in the logs for them to do a diagnosis. This had nothing to do with computer science - any bozo could write stuff to the log. Why didn't the intern do it? What do you mean, he can't make sense of my code? Yeah, I do know my code best. I guess it's the right thing to do. Certainly not as fun as designing, bullet-proofing and idiot-proofing new code, but good supportability is a "sine qua non" for a well done project!
Is that the end of it? No, further design and coding need to be done to make the software more manageable, to make the logs more systematic, and to make sure that the product works when it's deployed to multiple configurations, performs well and fails gracefully.
Unless you specialize in a certain aspect of manageability, reliability or diagnosis – this is not “sexy” development. I probably wouldn’t get as much satisfaction from designing event logs as I would from designing a new search algorithm.
I was getting paid to do all this (OK, so it was my own startup, but I was getting paid in VC money!) and it was still very hard. We did do it, but it took a lot of coaxing to get our developers to pay attention to it. They all preferred to work on the next release with all the sexy features - even though they knew that, to make the startup successful and still have a job, the unsexy stuff needed to be done and done RIGHT!
When you are working for the “love of the game” and not money, like in Open Source – who coaxes you? Who does the unsexy stuff? Are there enough people who specialize in the esoteric aspects of event logs, that this is not a problem? Or do users who need the feature “just do it” and add the code to the community version? Or are things slipping through the cracks?
I did a sweep of the usual suspect Linux developer mailing lists and found that there is concern about whether the unsexy stuff gets done. Here is a typical comment that I saw:
“I think that the only issue with Open Source boils down to this:
The things that nobody wants to do, but somebody has to.
Nobody wants to think about documentation. Or user interfaces. These things are hard, tedious, and a hell of a lot more boring than actually coming up with stuff to "make things work".” (from here)
Documentation is famously one of those things that is considered “unsexy” (well, ok in commercial software too). There are efforts like Grokdoc to make documentation of Open Source projects sexy by making it a priority. But the “who does unsexy?” issue is a real concern in Open Source.
We ran into a similar issue with event logs - you know, the text you write so that you can find out later what happened. At the lab we just investigated whether we could tell from the syslog and console messages that one of our boxes had crashed. We were a little taken aback by how many times we couldn't tell what states the machine had gone through.
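To make the point concrete, here is a small C sketch - my own illustration, not code from our investigation, with hypothetical state and daemon names - of the kind of explicit state-transition logging through syslog(3) that would have let us reconstruct the machine's history after the fact:

```c
#include <syslog.h>

/* Hypothetical daemon states; the names are illustrative only. */
enum state { STOPPED, STARTING, RUNNING, DRAINING };

static const char *state_name[] = {
    "STOPPED", "STARTING", "RUNNING", "DRAINING"
};

static enum state current = STOPPED;

/* Log every state transition with both the old and the new state, so a
   post-mortem reader of the syslog can replay the machine's history. */
static void transition(enum state next, const char *reason)
{
    syslog(LOG_INFO, "state change: %s -> %s (%s)",
           state_name[current], state_name[next], reason);
    current = next;
}

int main(void)
{
    openlog("mydaemon", LOG_PID | LOG_CONS, LOG_DAEMON);
    transition(STARTING, "boot");
    transition(RUNNING, "init complete");
    /* ... real work would happen here ... */
    transition(STOPPED, "shutdown requested");
    closelog();
    return 0;
}
```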
On doing some investigation we found that the most influential project addressing this issue, the Evlog project (supported mostly by IBM), has been quiet since 2004. This code is used internally within IBM but was never mainstreamed into the Linux kernel.
How does one get unsexy stuff like this into the Linux kernel so that it is comparable to UNIX/VMS/Windows?
I contend that it is critical to open source that attention be paid to event logs. They are critical in making any operating system reliable. VMS, UNIX and Windows all went through the process of making their event logs more meaningful - and this has helped make them much more reliable.
We will be addressing this further in the next couple of weeks - stay tuned!
by MichaelF on October 26, 2006 05:13pm
On September 6, 2006, Microsoft and Cisco announced the details of a technical partnership, first announced in October 2004, focused on providing interoperability between the companies' disparate network security technologies: NAC and NAP. In this interview, Sam digs into the details with Mark Ashida, General Manager of the Enterprise Networking Group. They also discuss Xorp and why Mark believes his is one of the most "open" groups at Microsoft.
As part of the announcement a whitepaper with details was produced and can be found here.