by Sam Ramji on August 30, 2007 05:57pm
On the day before OSCON officially kicked off, I was heading back from the Oregon Convention Center to downtown and ended up standing on the MAX next to someone who caught my attention for two reasons. First, he was the first person to tell me about the Tim O'Reilly/Eben Moglen conversation from earlier in the morning, which I had missed. Second, he had a cool flash memory microphone that he used to record podcasts on the fly. Turns out it was Barton George (http://www.blogs.sun.com/barton808/), Group Manager, Free and Open Source Software at Sun.
I had a nice chat with Barton (and we helped each other navigate the heavily under construction Portland downtown) and little did I know he also had a conversation with Sam Ramji that day. The podcast has been posted and you can download it here. Enjoy.
From Barton's Blog:
"Topics: Where the Open Source Software Lab fits within Microsoft; How big is Sam's group; When software technologies compete, you win; What reaction does he get when he turns up at FOSS events; Debating Eben Moglen at OSBC -- no one wants patent Armageddon; Is there a Wubuntu in the works?."
by Bryan Kirschner on August 29, 2007 01:09am
When I describe my job as “helping Microsoft and open source to grow together,” I get a broad range of reactions from people outside and inside of Microsoft. These have included sentiments along the lines of “that must be tough” or, on occasion, “you must be a glutton for punishment.”
After wrapping up a fairly momentous year* culminating in OSCON (see this and this), I thought the time was right to put some big-picture context around how I feel about my job.
The year 1995 was when we saw the first official public release (0.6.2) of the Apache server, and MySQL AB was founded.
The world was two years shy of the Debian Free Software Guidelines and three years away from the articulation of the Open Source Definition (OSD) they inspired.
The Open Source Technology Group (OSTG), which by virtue of operating both SourceForge and Freshmeat is today’s largest host of public open source projects, was about to be founded.
We were at the very beginning of the growth of open source into a significant, enduring part of the IT environment.
So what’s this graph below showing over the course of (roughly) 1995 through 2007?
It’s showing Microsoft’s reported fiscal year revenue, which grew to $51.122B USD in 2007 from $6.075B in 1995 (you can reproduce it with data from here.)
During most of this time, we didn’t have Codeplex. We didn’t have licenses submitted to the OSI. We didn’t have Port 25. We didn’t have Bill Hilf, or Sam Ramji, or the rest of the OSS lab. And we didn’t have http://microsoft.com/opensource.
And Microsoft and open source did grow, together, coincidentally. In retrospect, this is not surprising. Microsoft technologies supported an ecosystem of passionate developers, entrepreneurial individuals and companies, and tens of millions of end-user programmers and end users providing peer-to-peer assistance, sharing knowledge (and code) with each other.
And we had many people at Microsoft working on (to highlight some of my current favorites) the research and development and product management path to technologies like Silverlight and XNA and Photosynth.
Now we have all those things, plus the opportunity to think every day about the “growing together” that has happened coincidentally from (say) 1995 until July 2007, and about how we might work together with others to make it that much greater (food for thought: MySQL’s Community VP Kaj Arno blogged about the WAMP stack just after OSCON here).
There are reasons why my job can be challenging sometimes, but the slightest concern that Microsoft and open source don’t have opportunities to “grow together” by design faster and farther than they have (largely)** “by accident” over the last 10-plus years isn’t among them.
The “official” t-shirt of the Open Source Software Lab at Microsoft says “Open Source Software Lab at Microsoft: Reports of Snowballs Seen in Hell.” This year was another step toward replacing that slogan with “Open Source Software Lab at Microsoft: Of Course.” Then I’ll get back the answer I give people when I describe my job: not tough. Cool.
*I have internalized a July-to-June fiscal year calendar. I attribute this to the fact that my wife works in education, so summer forms an annual breakpoint for her, as well as to the fact I worked in Finance during a point in my life when I think I mistook a love of math for an affinity for pain.
**There’s more than enough material for, and reason to do, a separate post about some of the individual “pioneers” at Microsoft, without whom we would not have the resources we have in place today here at Microsoft.
by jcannon on August 24, 2007 03:58pm
Heading into a long (and quiet) weekend for many folks in the US, I thought I would highlight some useful open source and shared source projects on Codeplex, particularly after reading eWeek's gallery of the Top 25 Most Active Open Source Projects on Codeplex. There are some practical tools in the article - including an open source blogging engine, SQL sample applications and a cool mapping application. What many folks may not know is that there are over 2,000 projects available on Codeplex. Check out, among others:
IronRuby was also released last month (it lives on RubyForge) - John Lam has more details here. If you're interested in learning more, I would also recommend checking out our Sourceforge landing page that highlights open source projects and programs at Microsoft. While you're on Sourceforge, we have some interesting projects over there as well - including the WiX Toolset and the OpenXML/ODF Translator.
Enjoy the weekend (and the downloads). -Jamie
by hanrahat on August 23, 2007 04:02pm
I’ve been a regular attendee of the O’Reilly Open Source Conference in Portland and Linux World Expo – San Francisco for several years, but this is the first time I represented Microsoft at them. Between the two conferences, I met a lot of people with whom I’ve worked for many years. I appreciate the encouraging words I received from many of them and I respect the concerns others expressed regarding my decision to join Microsoft. A lot of our conversations were about what I thought I could accomplish by making the change.
One observation I’ve made while working with companies involved in open source is that every one of them wrestles with the balance between working within the community for the greater good and reserving value for their own need to compete successfully for business. There are few, if any, companies that are purely open-source directed. There are also few that are purely proprietary. Microsoft sits on the spectrum between proprietary and open source just like everyone else.
Clearly Microsoft’s balance tends toward the proprietary, but we demonstrated at both conferences that we take participation as a member of the open source community seriously and announced several significant actions. One of these announcements was that Microsoft is submitting both its permissive (MS-PL) and community (MS-CL) licenses to the OSI for certification. Another was John Lam’s announcement of the release of IronRuby and IronPython as open source projects, both of which are open to community contributions. Both of these efforts reflect serious attempts by Microsoft to participate in the development of truly open source software.
What’s also interesting is that the role of individual developers is changing, too. In his presentation at OSCON, “Current State of the Linux Kernel,” Greg Kroah-Hartman made the point that the largest group of contributors to the kernel is composed of “Unknown Individuals” who have no affiliation to a company with respect to their contributions. Roughly 18% of contributions come from this group, and 13% come from another group called “Amateurs.” But a member of the audience pointed out that this means the work of nearly 70% of contributors is being sponsored by industry. Of that 70%, few are employed to be purely open-source contributors; most have responsibilities to their individual companies to ensure that some value is retained for their own business purposes.
We’re all finding our balance, companies and individuals alike, and that balance is rarely stationary. It frequently changes as we assess our roles in the software development industry. One of the things I want to accomplish is to find ways that Microsoft can adopt open source methodologies and can contribute to the greater good. Two areas I will concentrate on for now are interoperability, through the work we’re beginning with Novell in the areas of virtualization and web services management, and engagement with the SAMBA community to help ensure the quality of interaction between SAMBA and Microsoft products. I hope to attend the CIFS Workshop at Google next month to see where Microsoft can work with the SAMBA community beyond our current level of sharing bug and test data.
One of the first activities I engaged in when I joined Microsoft was to help draft the mission of Microsoft’s Open Source Software Lab. Here in a nutshell is what I hope to accomplish at Microsoft.
Produce mutual respect and understanding between Microsoft and the Open Source community such that both act responsibly together for the sake of better software and human potential & inclusion.
I invite those of you I have worked with over the years and all of you I spoke with at OSCON and LWE to make this our common goal and to join me in the effort.
by anandeep on August 17, 2007 01:35pm
My overall impression was that OSCON was lower key than last year. There seemed to be fewer booths on the exhibition floor and less palpable excitement in the venue. A lot of people were complaining about the quality of the tutorials and the talks. Or it may just be that this was my second time attending OSCON and it didn’t have the same quality of excitement for me compared to the very first time!
I attended two keynotes (that I remembered an hour after I attended them) – one involved Eben Moglen, the lawyer dude for the FSF, tearing into Tim O’Reilly. Tim O’Reilly was asking Eben questions about whether GPL V3 gave Google a free ride. Eben went into how he wanted to protect freedom and how the Open Source people had “wasted ten years” not making “freedom” the issue. But as persuasive and articulate as Eben is, I think he left the audience with the feeling that the FSF had given in to Google to get GPL V3 passed. Eben even used the word “diplomacy” to describe the process of building GPL V3.
There were some dramatic moments, such as Eben pointing to Tim and then to the sign behind him and saying, “Take down that sign with YOUR name on it and put ‘Freedom’ there instead.” Tim even went so far as to say to him – “I will ignore the personal attacks.” To which Eben said – “This is not a personal attack, it’s an invitation to diplomacy.” Wow! I could have watched a musical and not had so much drama.
Don’t get me wrong, I admire the FSF’s devotion to its cause. They have been consistently practicing what they preach. I am glad that there are people like them to keep even big companies like Microsoft honest. But I feel let down with their inconsistency with respect to Google.
The other keynote that I enjoyed was Bill Hilf’s. Bill is our GM, and we know all the stuff he said in his speech. We try not to say one thing and practice another (surprise, surprise!). But he said it all so clearly, and so well, in front of a largely skeptical audience. It was a masterful and engaging performance. Even when ambushed by Nat Torkington with questions that were not on the agenda, he didn’t lose his verve and kept on emphasizing what we do instead of what we say. It feels great to have an official place in the Microsoft firmament @ www.microsoft.com/opensource . Wait! Or was it nice to be the exclusive open source guys in such a big company? (From a purely selfish point of view)
I attended a few talks. One was Sam Ramji’s talk about our Interop efforts in virtualization, identity and management. To tell you the truth, I went because Sam is my boss. But I stayed because he did a great job of simplifying and presenting the information. I learnt and reinforced a ton about virtualization and the Interop challenges around it. I now have a firm grasp on the subject – not isolated chunks of information unconnected to each other. Ok Sam, when are you doing a talk on High Performance Computing? :)
The other significant talk I attended was about Hadoop. Hadoop is an open source software platform that lets one easily write and run applications that process vast amounts of data. Or basically it implements Google’s MapReduce. According to Google’s original paper on MapReduce - “Programs written in the MapReduce style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system”.
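To make the programming model concrete, here is a minimal, single-process sketch of MapReduce-style word counting in Python. It is purely illustrative: the names map_fn, reduce_fn, and mapreduce are my own, not Hadoop APIs, and Hadoop would run the map and reduce phases in parallel across a cluster and handle the shuffle between them. The shape of the user-supplied functions, though, is the same.

```python
from collections import defaultdict

def map_fn(_, line):
    # Map phase: emit a (word, 1) pair for every word in an input line.
    for word in line.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    # Reduce phase: sum all the counts emitted for the same word.
    return word, sum(counts)

def mapreduce(records, map_fn, reduce_fn):
    # Group intermediate (key, value) pairs by key, then reduce each group.
    # A real framework such as Hadoop shuffles these groups across machines.
    groups = defaultdict(list)
    for key, value in records:
        for out_key, out_value in map_fn(key, value):
            groups[out_key].append(out_value)
    return [reduce_fn(k, v) for k, v in sorted(groups.items())]

if __name__ == "__main__":
    lines = enumerate(["the quick brown fox", "the lazy dog", "the fox"])
    for word, count in mapreduce(lines, map_fn, reduce_fn):
        print(word, count)
```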
Hadoop was created by Doug Cutting, the creator of Lucene and Nutch. In order to do MapReduce effectively, he had to build a Google File System (GFS)-like system called the Hadoop Distributed File System (HDFS). HDFS was originally built as infrastructure for the Apache Nutch web search engine project.
Now Yahoo is using Hadoop and HDFS for its back end. There is now an open source implementation of Google’s open-source-based proprietary stuff. If the community gets behind it, the truly open source stuff may outshine the open source but proprietary stuff. Makes your head spin.
Oh, and why the name Hadoop? Doug Cutting’s son’s favorite elephant was named Hadoop, a name that came from the son’s imagination. I love Open Source!
by MichaelF on August 13, 2007 03:10pm
I had the opportunity to present at both OSCON in Portland and at LinuxWorld in San Francisco in the last three weeks – both O’Reilly and IDG were gracious enough to grant me a session on the work that Microsoft is doing with Novell, XenSource, and others on Linux and Windows interoperability.
Overall our focus is on three critical technology areas for the next-generation datacenter: virtualization, systems management, and identity. Identity in particular spans enterprise datacenters and web user experiences, so it’s critical that everyone shares a strong commitment to cross-platform cooperation.
Here are the slides as I presented them, with some words about each to give context, but few enough to make this post readable overall. I skipped the intro slides about the Open Source Software Lab since most Port 25 readers know who we are and what we do.
The market for heterogeneous solutions is growing rapidly. One visible sign of this is virtualization, an “indicator technology,” which by its nature promotes heterogeneity. Virtualization has become one of the most important trends in the computing industry today. According to leading analysts, enterprise spending on virtualization will reach $15B worldwide by 2009, at which point more than 50% of all servers sold will include virtualization-enabled processors. Most of this investment will manifest itself on production servers running business critical workloads.
Given the ever improving x86 economics, companies are continuing to migrate off UNIX and specialty hardware down to Windows and Linux on commodity processors.
So, why now?
First, customers are insisting on support for interoperable, heterogeneous solutions. At Microsoft, we run a customer-led product business. One year ago, we established our Interoperability Executive Customer Council, a group of Global CIOs from 30 top global companies and governments – from Goldman Sachs to Aetna to NATO to the UN. On the Microsoft side, this council is run by Bob Muglia, the senior vice president of our server software and developer tools division. The purpose of this is to get consistent input on where customers need us to improve interoperability between our platforms and others – like Linux, Eclipse, and Java. They gave us clear direction: “we are picking both Windows and Linux for our datacenters, and will continue to do so. We need you to make them work better together.”
Second, MS and Novell have established a technical collaboration agreement that allows us to combine our engineering resources to address specific interoperability issues.
As part of this broader interoperability collaboration, Microsoft and Novell technical experts are architecting and testing cross-platform virtualization for Linux and Windows and developing the tools and infrastructure necessary to manage and secure these heterogeneous environments.
I am often asked, “Why is the agreement so long?” as well as “Why is the agreement so short?” The Novell-Microsoft TCA is a 5-year mutual commitment. To put this in context, 5 years from now (2012) is two full releases of Windows Server and 20 Linux kernel updates (given the 2.5 month cycle we’ve seen for the last few years). This is an eternity in technology. What’s important to me is that it’s a multi-product commitment to building and improving interoperability between the flagship products of two major technology companies. This means we can build the practices to sustain great interoperable software over the long term as our industry and customer needs continue to evolve.
This talk covers two major components of the future of Linux and Windows interoperability: Virtualization and Web Services protocols.
On the Metal focuses on the virtualization interoperability work being done between Windows Server 2008 and Windows Server virtualization, and SUSE Linux Enterprise Server and Xen.
On the Wire covers the details and challenges of implementing standards specifications, such as WS-Federation and WS-Management; and how protocol interoperability will enable effective and secure virtualization deployment and management.
These are the key components required for the next-generation datacenter. We know the datacenters of today are mixtures of Windows, Linux, and Unix, x86, x64 and RISC architectures, and a range of storage and networking gear. Virtualization is required to enable server consolidation and dynamic IT; it must be cross-platform. Once applications from multiple platforms are running on a single server, they need to be managed – ideally from a single console. Finally, they must still meet the demands of security and auditability, so regardless of OS they must be accessible by the right users at the right levels of privilege. Hence, cross-platform virtualization demands cross-platform management and identity.
In non-virtualized environments, a single operating system is in direct control of the hardware. In a virtualized environment a Virtual Machine Monitor manages one or more guest operating systems that are in “virtual” control of the hardware, each independent of the other.
A hypervisor is a special implementation of a Virtual Machine Monitor. It is software that provides a level of abstraction between a system’s hardware and one or more operating systems running on the platform.
Virtualization optimizations enable better performance by taking advantage of “knowing” when an OS is a host running on HW or a guest running on a virtual machine.
Paravirtualization, as it applies to Xen and Linux, is an open API between a hypervisor and Linux and a set of optimizations that together, in keeping with the open source philosophy, encourage development of open-source hypervisors and device drivers.
Enlightenment is an API and a set of optimizations designed specifically to enhance the performance of Windows Server in a Windows virtualized environment.
Hardware manufacturers are interested in virtualization as well. Intel and AMD have independently developed virtualization extensions to the x86 architecture. They are not directly compatible with each other, but serve largely the same functions. Either will allow a hypervisor to run an unmodified guest operating system without incurring significant performance penalties.
Intel's virtualization extension for the 32-bit and 64-bit x86 architecture is named IVT (short for Intel Virtualization Technology). The 32-bit or IA-32 IVT extensions are referred to as VT-x. Intel has also published specifications for IVT for the IA-64 (Itanium) processors, which are referred to as VT-i.
AMD's virtualization extension for the 64-bit x86 architecture is named AMD Virtualization, abbreviated AMD-V.
There are three Virtual Machine Monitor models.
A type 2 Virtual Machine Monitor runs within a host operating system. It operates at a level above the host OS and all guest environments operate at a level above that. Examples of these guest environments include the Java Virtual Machine and Microsoft’s Common Language Runtime, which runs as part of the .NET environment and is a “managed execution environment” that allows object-oriented classes to be shared among applications.
The hybrid model, shown in the middle of the diagram has been used to implement Virtual PC, Virtual Server and VMWare GSX. These rely on a host operating system that shares control of the hardware with the virtual machine monitor.
A type 1 Virtual Machine Monitor employs a hypervisor to control the hardware with all operating systems run at a level above it. Windows Server virtualization (WSv) and Xen are examples of type 1 hypervisor implementations.
Development of Xen and the Linux hypervisor API paravirt_ops began prior to the release of Intel and AMD’s virtualization-assisted hardware, and both were designed, in part, to solve the problems inherent in running a virtualized environment on non-virtualization-assisted hardware. They continue to support both virtualization-assisted and non-virtualization-assisted hardware. These approaches are distinct from KVM, or the Kernel-based Virtual Machine, which supports only virtualization-assisted hardware; this approach uses the Linux kernel as the hypervisor and QEMU to set up virtual environments for Linux guest OS partitions.
In keeping with the open source community’s philosophy of encouraging development of open source code, the paravirt_ops API is designed to support open-source hypervisors. Earlier this year VMware’s VMI was added to the kernel as was Xen. Paravirt_ops is in effect a function table that enables different hypervisors – Xen, VMware, WSv – to provide implementation of a standard hypercall interface, including a default set of functions that write to the hardware normally.
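As a rough conceptual sketch of that idea: the real paravirt_ops is a C structure of function pointers inside the kernel, but the hypothetical Python below (all operation and backend names are mine, for illustration only) shows the dispatch-table pattern, where a hypervisor backend overrides only the entries it needs while the defaults keep writing to the hardware directly.

```python
# Hypothetical sketch of a paravirt_ops-style dispatch table.
# Default ("native") operations touch the hardware directly; a
# hypervisor backend overrides selected entries with hypercall versions.

def native_write_cr3(value):
    print(f"native: write {value:#x} directly to CR3")

def native_flush_tlb():
    print("native: flush TLB with a hardware instruction")

paravirt_ops = {
    "write_cr3": native_write_cr3,   # defaults assume bare metal
    "flush_tlb": native_flush_tlb,
}

def xen_write_cr3(value):
    print(f"xen: hypercall to update CR3 with {value:#x}")

def register_xen_backend(ops):
    # A hypervisor replaces only the operations it needs to intercept.
    ops["write_cr3"] = xen_write_cr3

# The kernel always calls through the table, so the same binary can run
# natively or under any hypervisor that fills in the table.
register_xen_backend(paravirt_ops)
paravirt_ops["write_cr3"](0x1000)
paravirt_ops["flush_tlb"]()
```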
Windows Server 2008 enlightenments have been designed to allow WS 2008 to run in either a virtualized or non-virtualized environment *unmodified*. WS 2008 recognizes when it is running as a guest on top of WSv and dynamically applies the enlightenment optimizations in such instances.
In addition to a hypercall interface and a synthetic device model, memory management and the WS 2008 scheduler are designed with optimizations for when the OS runs as a virtual machine.
The WSv architecture is designed so that a parent partition provides services to the child partitions that run as guests in the virtual environment. From left to right:
Native WSv Components:
Like the WSv architecture, the Xen architecture is designed so that a special partition, in this case Dom 0, provides services to guest partitions that run in a virtual environment.
Native Xen Components:
The slide says it all… I couldn’t figure out a way to put this one in a graphic. ;)
Virtualization interoperability testing is very challenging. While the architecture may look similar at a high level, the devil is in the details – down at the API and ABI level, the technologies are quite different.
From a personnel standpoint, the expertise required to debug OS kernels is hard to find, let alone software engineers with these skills who are focused on writing test code. Microsoft has established a role known as “Software Design Engineer in Test” or “SDE/T” which describes the combination of skills and attitude required to test large-scale complex software rigorously through automated white-box test development.
The problem of testing Linux and Windows OSes across WSv and Xen requires these kernel-level skills, but on both operating systems. It’s a non-trivial challenge.
Next is the technical issue of the test matrix:
To put this in context, we need a minimum of 40 server chassis to test this matrix – for each operating system.
On top of this, the software components that must be tested include:
Since Windows and Linux are general-purpose operating systems, these components must be tested across a range of workloads which will guarantee consistent, high-performance operation regardless of usage (file serving, web serving, compute-intensive operations, networking, etc.).
Finally – and no less a challenge than the skills and technology aspects – is that of building a shared culture between two very different and mature engineering cultures. What is the definition of a “Severity 1” or “Priority 1” designation for a defect? How do these defects compete for the core product engineering teams’ attention? How are defects tracked, escalated, processed, and closed across two different test organizations’ software tools? Most importantly, what is the quality of the professional relationships between engineers and engineering management of the two organizations? These are the critical issues to make the work happen at high quality and with consistency over the long term.
WS-Management is an industry standard protocol managed by the DMTF (Distributed Management Task Force), whose working group members include HP, IBM, Sun, BEA, CA, Intel, and Microsoft among others. The purpose is to bring a unified cross-platform management backplane to the industry, enabling customers to implement heterogeneous datacenters without having separate management systems for each platform.
All Microsoft server products ship with extensive instrumentation, known as WMI. A great way to see the breadth of this management surface is to download Hyperic (an open source management tool) and attach it to a Windows server – all of the different events and instrumentation will show up in the interface, typically several screen pages long.
It is not surprising that the management tools vendors are collaborating on this work – and it’s essential to have not just hardware, OS, and management providers but application layer vendors like BEA as well – but to me the most important aspect of the work is the open source interoperability.
In the Microsoft-Novell Joint Interoperability Lab, we are testing the Microsoft implementation of WS-Management (WinRM) against the openwsman and wiseman open source stacks. This matters because the availability of proven, interoperable open source implementations will make it relatively easy for all types of providers of both management software and managed endpoints to adopt a technology that works together with existing systems out of the box. Regardless of development or licensing model, commercial and community software will be able to connect and be well-managed in customer environments.
So what does this all mean? We’ll see end-to-end interoperability, where any compliant console can manage any conforming infrastructure – and since the specification and the code are open, the barriers to entry are very low. It’s important that this capability extends to virtualized environments (which is non-trivial) so that customers can get the full potential of the benefits of virtualization – not just reducing servers at the cost of increased management effort.
Sometimes people challenge me with the statement “if you would just build software to the specification, you wouldn’t need to do all this interoperability engineering!” This is in fact a mistaken understanding of interoperability engineering. Once you’ve read through a specification – tens to hundreds of pages of technical detail – and written an implementation that matches the specification, then the real work begins. Real-world interoperability is not about matching what’s on paper, but what’s on the wire. This is why it’s essential to have dedicated engineering, comprehensive automated testing, and multiple products and projects working together. A good example of this is the engineering process for Microsoft’s Web Services stack. The specifications (all 36 of them) are open, and licensed under the OSP (Open Specification Promise). In the engineering process, Microsoft tests the Windows Web Services implementation against the IBM and the Apache Axis implementations according to the WS-I Basic Profile. A successful pass against all these tests is “ship criteria” for Microsoft, meaning we won’t ship our implementation unless it passes.
In the messy world of systems management, where multiple generations of technologies exist at a wide range of ontological levels (devices, motherboards, networking gear, operating systems, databases, middleware, applications, event aggregators, and so on), testing is complex. Adding virtualization into this mix adds another layer of complexity, necessitating methodical and disciplined testing.
OpenID is a distributed single sign-on system, primarily for websites. It’s supported by a range of technology providers including AOL, LiveJournal, and Microsoft.
WS-Federation is the identity federation web services standard which allows different identity providers to work together to exchange or negotiate information about user identity. It is layered on top of other Web Services specifications including WS-Trust, WS-Security, and WS-SecurityPolicy – many of which are lacking an open source implementation today.
ADFS is Active Directory Federation Services, a mechanism for identity federation built into Microsoft Active Directory.
Cardspace is an identity metasystem, used to secure user information and unify identity management across any internet site.
Project Higgins is an Eclipse project intended to develop open source implementations of the WS-Federation protocol stack as well as other identity technologies including OpenID and SAML.
Samba is a Linux/Unix implementation of Microsoft’s SMB/CIFS protocols for file sharing and access control information. It is widely deployed in Linux-based appliances and devices, and ships in every popular distribution of Linux as well as with Apple’s OS X.
This work is still in early phases, and you can expect more details here in the future. Mike Milinkovich of Eclipse has been a champion for improving the interoperability of Eclipse and Microsoft technologies, especially Higgins. Separately the Bandit Project has made significant progress in building technologies which support CardSpace. I appreciate the work of these teams and look forward to more progress here.
The slide says it all here. We’re committed to long term development and delivery of customer-grade interoperability solutions for Windows and Linux, and we’ll do it in a transparent manner. Tom Hanrahan, the Director of the Microsoft-Novell Joint Interoperability Lab, brings many years of experience in running projects where the open source community is a primary participant. I and my colleagues at Microsoft are excited to learn from him as he puts his experiences at the OSDL/Linux Foundation and at IBM’s Linux Technology Center into practice guiding the work of the lab.
You can expect regular updates from us on the progress and plans for our technical work, and I expect you to hold me and Tom accountable for this promise.
I hope you found the presentation valuable. I felt it was important to get this material out broadly since it will impact many people and essential to be clear about what we are building together with Novell, XenSource, and the open source community.
by jonrosenberg on August 10, 2007 11:45am
Just a brief update to my OSCON blog post. Today we started the ball rolling on the submission of our Shared Source licenses to the OSI approval process. We are submitting two licenses, the Microsoft Permissive License (MS-PL) and Microsoft Community License (MS-CL). Thank you to both Russ Nelson and Michael Tiemann for the guidance and informed opinions as we worked through this process.
The first step in the submission process was to post the licenses in HTML format on a web site. We’ve done that and you can see them here. We’ve also provided the license approval committee with our analysis of how these new submissions contribute to the body of OSI approved licenses. In addition we’ve sent an e-mail to the license-discuss alias, describing the submission.
We look forward to some lively discussion on license-discuss over the next week. After that, I personally look forward to two weeks of vacation, during which time any activity involving a computer will be considered by my family to be a serious infringement of vacation terms. I will be picking up the discussion thread again after Labor Day and look forward to continuing the journey.
by kishi on August 07, 2007 01:57pm
Level-Set – Log Management: This section covers open-source technology aimed primarily at host-based logging, log file rotation, and log file analysis. Many of these tools are very common free and open-source software tools that are distributed and preconfigured with most of the major Linux systems, including those from major vendors such as RedHat and Novell.
I. Logrotate
Logrotate is a very popular application utilized in a number of Linux systems, including all RedHat and SUSE based systems. The logrotate utility typically runs periodically via cron, a task scheduling application. The utility will read a configuration file (/etc/logrotate.conf), and archive and compress log files according to the configuration. Administrators can configure when log files should be rotated based on age and size, and how long backlogs should be maintained. Older archived log files can then be swapped out and replaced with newer archives.
II. Syslogd and klogd
Typical Linux systems utilize a syslog daemon to capture log messages from userspace applications and write them to text-based log files or send them to a logging host over the network. The syslogd daemon is often accompanied by a klogd application which is designed to capture and log kernel messages.
The behavior of the syslog daemon can be configured via the /etc/syslog.conf configuration file. All messages captured by syslog are categorized by facility and priority. Messages can then be sent to particular log files or logging hosts, or dropped completely based on their facility and priority attributes.
List of syslog facilities and priorities:
- Facilities: auth, authpriv, cron, daemon, kern, lpr, mail, news, syslog, user, uucp, and local0 through local7
- Priorities (lowest to highest): debug, info, notice, warning or warn, err or error, crit, alert, emerg or panic
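As a small illustration of how an application hands messages to syslog with a particular facility and priority, the sketch below uses Python's standard logging module. It assumes a local syslog daemon listening on /dev/log (typical on Linux, and the names myapp and the log paths are made up for the example); syslogd or syslog-ng would then route the messages according to its configuration file.

```python
import logging
from logging.handlers import SysLogHandler

# Hand log records to the local syslog daemon; assumes /dev/log exists,
# which is the case on most Linux systems.
handler = SysLogHandler(address="/dev/log", facility=SysLogHandler.LOG_LOCAL0)
handler.setFormatter(logging.Formatter("myapp: %(message)s"))

logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Python logging levels map onto syslog priorities:
# warning -> "warning", error -> "err".
logger.warning("disk usage above 90%")
logger.error("failed to rotate /var/log/myapp.log")
```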
III. Syslog-ng
The syslog-ng application aims to be an enhanced drop-in replacement for the traditional syslog daemon. It provides many of the same features of the standard syslog daemon, but includes additional features such as advanced message filtering based on content, remote logging via UDP or TCP, and the ability to write log files to a database such as MySQL or PostgreSQL. More recent SUSE-based systems such as SLES10 have switched to syslog-ng as the default syslog server.
IV. Viewing Logs
Most log files on a Linux system are stored in plain-text, which means they can be viewed and parsed using a number of different command-line tools. Typical utilities such as tail, head, grep, cat, less, more, sed and awk can be used to view and filter log messages via the command line.
There are also myriad utilities designed to parse and view log files via a GUI or web browser. Some utilities are even designed to handle specific log formats, such as those generated by Linux’s Netfilter firewall subsystem.
GNOME System Log Viewer
The GNOME system includes a GTK-based system log viewing application that displays system logs via the GUI.
YaST System Log Module
SUSE-based systems using YaST typically include a module called View System Log (known internally as view_anymsg). Similar to the GNOME System Log Viewer, the YaST module allows an administrator to view many of the various system logs without using the command line.
V. Log Analysis
The logwatch utility is designed to parse system logs, locate any entries that might indicate a security threat or system failure, and send an email report to a designated address. Logwatch is distributed with RedHat Enterprise Linux systems. The following is an excerpt from the RPM description:
“LogWatch is a customizable log analysis system. LogWatch parses through your system's logs for a given period of time and creates a report analyzing areas that you specify, in as much detail as you require. LogWatch is easy to use and claims that it will work right out of the package on almost all systems. Note that LogWatch now analyzes Samba logs.”
LogWatch is typically executed periodically via cron, a task scheduling application.
The logcheck utility is a part of the Sentry Tools project that also includes portsentry, a utility designed to detect port scans. Similar to the LogWatch utility, the software is designed to parse system log files, find log entries that may indicate security problems and send an email to a preconfigured address. Also similar to the LogWatch utility, logcheck relies on the standard cron utility to be periodically executed.
That does it for the Log Management and Analysis section. We have one last blog to go, and we certainly hope that you found the information we have captured for you useful. If you’re running any special toolsets or customizable scripts for log management and analysis and would like to share your experience with us, please send us your feedback and, as always, THANK YOU for tuning into Port25.
by Sam Ramji on August 03, 2007 02:51pm
Back in the Spring, Sam Ramji attended an Olliance event entitled the Open Source Think Tank. It's a smaller gathering, well attended by roughly 100 executives and influential developer-users of open source software. During one of the sessions, Sam and Justin Steinman took an impromptu moment to answer some tough questions regarding the nature of the Microsoft-Novell partnership. Justin Steinman is Novell's Director of Marketing for Linux and Open Platforms.
Many of the questions had been asked before and in fact have been posed more than once on Port 25. We thought these discussions would be interesting to the community at large - ...so Sam & Justin hopped on the phone recently to answer them in podcast format. Take a listen....I try to emcee - but these are tough guys to keep on one topic :) As always, we welcome feedback and we'll invite Sam & Justin to answer the comments. If you want more information on the Novell partnership - you may want to check out moreinterop.com - home to most announcements, events and information related to the partnership.
by Paula Bach on August 01, 2007 03:35pm
In my last blog I talked about interdisciplinarity and multidisciplinarity and a little bit about my research this summer. In parts 1 and 2 of this blog I am going to talk more about the research I have been doing here at Microsoft. Over the last few months I have been looking at a phenomenon called usability expertise. Anybody who has had difficulty using a product has some experience with usability expertise. Usability expertise is knowledge about how to design an artifact to ensure users experience product effectiveness, efficiency, and satisfaction in a specified context of use. Even if people are not experts in Human Computer Interaction (HCI), they can experience a lack of usability expertise in the design of the product.
HCI experts are actually quite rare because the field is young and underdeveloped. The field of HCI is newer than computer science. HCI grew out of computer science about fifteen years after the software engineering crisis in the sixties and although Human Factors is about fifty years old, it has not necessarily been linked to software engineering like HCI has. Software development has included a user interface role to design and develop the human-computer interface, and although some companies still employ user interface developers, HCI experts include UI designers, Usability Engineers, User Experience Researchers and Designers, and Interaction Designers—roles that go beyond the interface and include field research, visual design, and lab studies, for example.
Although the obvious place to look for usability expertise is in the knowledge of HCI experts, I am interested in what role this expertise plays in software development. Just having HCI experts available is not enough to ensure good usability. I want to know who has usability expertise, how it is communicated among project members, and how it is used to make decisions. To find these things out, the research looks at both proprietary and open source software development settings. What I am reporting here is an overview, or summary, of preliminary findings. I am still analyzing the data and will publish “official results” in the next year and a half while I work on and finish my dissertation. The research seeks to understand the role of usability expertise in software development and to use that understanding to inform the design of a feature or tool on CodePlex that will support usability expertise for projects interested in making sure their software is usable by their intended user base.
Usability expertise in the context of design is related to design rationale, or more specifically usability design rationale. Design rationale is "the capture, representation, and use of reasons, justification, notation, methods, documents, and explanations involved in the design of an artifact" (from the book Design Rationale by Moran and Carroll). Since design rationale is a well defined concept that has many details, its presence in real design discussions may be fragmented. This fragmentation might be better understood as usability expertise. So a rough definition of usability expertise might be the “stuff” needed to talk about and make decisions about usability during software development. The “stuff” could be the elements in design rationale or something people have not talked about before. In this sense my discoveries made while investigating the role of usability expertise could be groundbreaking or they could be well known in the software development communities. Either way, reporting the findings on the role of usability expertise should be interesting. In fact, several people, both at Microsoft and in the open source communities I surveyed, have already stated that they would like to see the findings, so this is encouraging.
I am collecting data in a number of ways: surveys, interviews, and observations. I surveyed people at Microsoft who are part of the software development process of a project, namely usability and user experience researchers and designers, developers, and program managers. In the open source world I posted the survey to major projects that met the criteria for overtly caring about usability, namely that they had a usability list and at least one person listed as a usability expert. The Microsoft usability expertise survey is still collecting responses, and although I am still working with the data from the open source survey, I can mention a few things.
In the open source survey, fatigue affected about half of the 125 respondents, with 56 making it to the last question. The survey had two open-ended questions asking about the importance and challenges of usability in open source. Usually open-ended questions are best saved for the end, after other more important questions are answered. The tradeoff was that these open-ended questions were important, and placing them at the end could have biased the responses, because the survey also included questions that asked about specifics of the importance of, and challenges with, usability.
Data clustered around categories of ease of use, simplicity, and consistency for usability importance, with each category claiming about a quarter of the responses. About 10% of the respondents stated that issues related to system performance were important for usability. On the challenges side, about a quarter of the respondents reported that challenges with usability in open source software development were developer based. This included not valuing usability, not having usability expertise other than self-referential expertise (based on one's own experience), and communication problems related to common ground. Common ground is when two people reach a mutual understanding such that one person knows that the other person knows that the first person knows. Common ground is more difficult to reach in computer-mediated environments than in face-to-face environments because not as many channels exist to help with understanding; in face-to-face settings you can use people’s expressions and gestures to help you understand what they are saying. Other categories included lack of resources and lack of process (each at about 10% of the responses). Other questions I am asking of the data include the following:
1. Who has usability expertise?
2. How is usability expertise communicated?
3. How is usability expertise used to make decisions?
4. Who cares about usability expertise?
5. How available is usability expertise?
The data may not be able to answer the above questions in full, but it will get me closer to asking different questions that may be more relevant to the data. I am also conducting interviews, which may be able to address the questions and get at a depth that surveys cannot.
I have been scheduling and conducting interviews with Microsoft people and will report on those preliminary findings in the next blog. I will conduct the open source interviews via video conference when I get back to Penn State. The open source usability people I am going to talk to are all over the world: US, Canada, Germany, Australia, and France.
I have also been observing three open source projects, looking at email lists and other interesting things like conversations in the bug tracker, how a usability issue is handled in the bug tracker, and UI specifications. I chose three ‘big’ open source projects that attend to usability. I wanted diversity in the projects and a wide user base. I spent 8 weeks observing the workings of usability in Firefox, KDE, and OpenOffice.org. The discussions on the email lists vary considerably. Some are short and polite, with a developer inquiring about the usability of a particular design change or feature he is thinking about. Others are heated and get users, developers and usability people involved trying to hash out the merits of a feature.
The most often used design rationale, or type of usability expertise, is self-referential. The people on the lists (in the beginning mostly users or user/developers) respond to the feature proposal and speculate about the usability of the change based on their own experience. Since most of the users on the lists are advanced or power users, this might not be representative of the main user base, at least for the three projects I was studying. I don’t know if the projects have any data about their user base, but it may be that the email lists are only one input to the decision making about usability. Despite the openness of the discussion lists and other aspects of the development, there are other decisions that are made ‘behind the scenes’. Possibly, the ‘behind the scenes’ usability expertise that contributes to decisions about which usability fixes to include in the next release is similar to how proprietary usability expertise is used in decision making. This is something I will consider when investigating the role of usability expertise in both environments.