The TypeScript language is made available under the Open Web Foundation’s OWFa 1.0 Specification Agreement, and the community is invited to discuss the language specification. Microsoft’s implementation of the compiler is also available on CodePlex under the Apache 2.0 license. There you can view the roadmap, and over the next few weeks and months you’ll see the TypeScript team continue to develop on CodePlex in the open.
TypeScript is one foray into making programming languages and tooling even more productive. Pick it up, take it for a spin, and give your feedback. You can contribute by discussing the language specification or filing a bug.
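If you would like a quick taste before taking it for a spin, here is a minimal sketch of TypeScript's optional static typing; the function and names below are purely illustrative and not taken from the specification.

```typescript
// Minimal illustrative sketch of TypeScript's optional static typing.
// The names here are made up for this example.
function greet(name: string): string {
    return "Hello, " + name;
}

const message: string = greet("Port 25");
console.log(message);

// greet(42); // would be rejected by the compiler: number is not assignable to string
```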
Olivier Bloch, Senior Technical Evangelist, Microsoft Open Technologies, Inc.
by Claudio Caldato on December 15, 2010 08:00am
As you know, Microsoft is committed to interoperability, and the IE team has previously blogged about and provided developer previews and samples showing "Same Markup" - the same HTML, CSS, and script working across browsers - in action.
Today, as part of the interoperability bridges work we do on this team, we're making available a new Firefox add-on that enables Firefox users on Windows to play H.264-encoded HTML5 video by using the built-in capabilities of Windows 7.
For several years now, Microsoft has offered the extremely popular Windows Media Player plug-in for Firefox, which millions of people download each month to watch Windows Media content.
This new plug-in, known as the HTML5 Extension for Windows Media Player Firefox Plug-in, is available for download here at no cost. It extends the functionality of the earlier plug-in for Firefox and enables web pages that offer H.264 video through standard W3C HTML5 to work in Firefox on Windows. Because H.264 video is so prevalent on the web, this interoperability bridge is important for Firefox users who are Windows customers.
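To give a rough sense of what "H.264 video through standard W3C HTML5" looks like from a page author's side, here is a small, hypothetical TypeScript sketch that uses the standard canPlayType check; it is generic HTML5 feature detection, not code taken from the plug-in, and the file name is made up.

```typescript
// Illustrative only: probe whether the browser can play H.264 ("avc1") video
// inside an HTML5 <video> element. This is standard HTML5 feature detection,
// not code from the HTML5 Extension for Windows Media Player Firefox Plug-in.
const video = document.createElement("video");

// canPlayType returns "", "maybe", or "probably".
const support = video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');

if (support !== "") {
    video.src = "movie.mp4";          // hypothetical H.264-encoded file
    document.body.appendChild(video);
    video.play();
} else {
    console.log("No native H.264 playback available in this browser.");
}
```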
H.264 is a widely-used industry standard, with broad and strong hardware support. This standardization allows users to easily take what they've recorded on a typical consumer video camera, put it on the web, and have it play in a web browser on any operating system or device with H.264 support, such as on a PC with Windows 7.
H.264 is also a very well established and widely supported video compression format, developed for use in high definition systems such as HDTV, Blu-ray and HD DVD as well as low resolution portable devices. It also offers better quality at lower file sizes than both MPEG-2 and MPEG-4 ASP (DivX or XviD).
The HTML5 Extension for Windows Media Player Firefox Plug-in continues to offer our customers value and choice, since those who have Windows 7 and are using Firefox will now be able to watch H.264 content through the plug-in.
Microsoft is already deeply engaged in the HTML5 process with the W3C as we believe that HTML5 will be important in advancing rich, interactive web applications and site design.
Claudio Caldato, Principal Program Manager, Interoperability Strategy Team
In case you missed the great Open Source and related Interop news that came out of Microsoft's annual MIX conference, held in Las Vegas this week, here's a recap.
Scott Guthrie, a Corporate Vice President in the Microsoft Developer Division, used his MIX keynote to discuss the company's commitment to sponsoring open source projects, such as the Orchard project, a free CMS project in the Outercurve Foundation's ASP.NET Open Source Gallery.
Orchard 1.1 is now available, along with the new UserVoice and DISQUS modules that contribute to the growing number of community-authored extensions for Orchard.
Guthrie also announced an ASP.NET MVC 3 Tools Update, which enables Web developers to innovate quickly and easily via new HTML 5 markup support, Entity Framework 4.1 with Entity Code First now built in for easier database Web solution development, and expanded NuGet capabilities for finding and installing community components.
Guthrie also used his keynote to announce the immediate availability of the Microsoft Silverlight 5 beta, which provides advances in rich media and application development.
Silverlight is a free web-browser plug-in that enables interactive media experiences, rich business applications and immersive mobile apps. It works on all major operating systems and in all major browsers, including Firefox, Google Chrome, Safari, and Internet Explorer.
New capabilities in Silverlight 5 include Hardware Video Decode, for enhanced video quality and performance, and "Trickplay," which provides variable-rate video playback with audio pitch correction.
The beta also offers a new Microsoft XNA-based interface for delivering 3-D visualizations within applications, along with a host of new features that are designed to enhance developer productivity and end-user experiences.
In his MIX keynote, Dean Hachamovitch, the Corporate Vice President for Internet Explorer, announced the addition of a new prototype, FileAPI, to the HTML5 Labs site, as well as plans for the MediaCapture API.
Microsoft launched HTML5 Labs last December as the place where it shares prototypes of early and unstable standards, and committed to regularly update these prototypes and add additional prototypes based on what will most help with the testing of the specifications.
Since then, we have updated the WebSockets prototype three times and we have analyzed a number of specifications, with three new areas currently under active investigation. We have also been working with, and listening to, the feedback from early users, and have updated the HTML5 Labs site and given it a new look and feel.
For more context on all this, read the blog by Walid Abu-Hadba, the Corporate Vice President for Developer Platform & Evangelism, Scott Guthrie, and Soma Somasegar, a Senior Vice President in the Developer Division, about Standards-based web, plug-ins, and Silverlight. In this blog they share their thoughts on the role of plug-ins in general, and Silverlight in particular, in the context of HTML5 and the future of the web.
A new production version of the Windows Azure AppFabric Access Control service was also announced at MIX. It enables you to build a single sign-on experience into applications by integrating with standards-based identity providers, including enterprise directories such as Active Directory and consumer-oriented web identities such as Windows Live ID, Google, Yahoo! and Facebook.
The Access Control service enables this experience through commonly used industry standards to facilitate interoperability with other software and services that support the same standards.
I finally had the chance to sit down with Morten at MIX11 in Las Vegas last week to discuss the work he is doing on WordPress with Windows Azure to solve some common challenges with multi-site WordPress installations using traditional hosting.
In Morten's words: "I am building a garden just for me and my clients...I control it...but the security and management of the garden is run by a very large company...they also will make sure that it works!"
Read Morten's blog on http://www.designisphilosophy.com and find him on Twitter @Mor10
Craig Kitterman, Twitter: @craigkitterman, http://craig.kitterman.net
I am pleased to announce that Microsoft has joined Joyent and Ryan Dahl in their effort to make Windows a supported platform in Node.
Our first goal is to add high-performance I/O Completion Port (IOCP) support to Node, giving developers the same high performance and scalability on Windows that Node is known for; the IOCP API allows many asynchronous input/output operations to be handled simultaneously.
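To make the performance point concrete, here is a minimal, hypothetical Node sketch (written in TypeScript) of the non-blocking style that this work is meant to preserve on Windows: the process never blocks a thread while the operating system completes the I/O, which is exactly what IOCP provides on that platform. The file name and port are made up.

```typescript
// Minimal sketch of Node's non-blocking I/O model (illustrative only).
// Each request starts an asynchronous file read; the event loop keeps serving
// other connections while the OS completes the I/O (via IOCP on Windows once
// that support lands).
import * as http from "http";
import * as fs from "fs";

const server = http.createServer((req, res) => {
    fs.readFile("hello.txt", (err, data) => {   // hypothetical file
        if (err) {
            res.writeHead(500);
            res.end("read failed");
            return;
        }
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end(data);
    });
});

server.listen(8080, () => console.log("listening on port 8080"));
```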
At the end of this initial phase of the project, we will have official binary node.exe releases on nodejs.org, meaning that Node.js will run on Windows Azure, Windows 2008 R2, Windows 2008 and Windows 2003.
You can read more about all this on nodejs.org as well as on Joyeur.com.
While this is just the beginning of the journey to make Node.js on Windows a great platform for Node developers, I’m really excited about making this happen.
So, stay tuned, as there’s a lot more to come!
by hjanssen on July 20, 2009 03:24pm
Well, there is no easy way to say this, so I am simply going to start this blog with the following line.
Microsoft just submitted source code for the Hyper-V Linux Integration Components to the Linux kernel community under GPL v2.
Well, there's a conversation starter! Are you still all sitting in your chairs???
Let me summarize:
Fallen off your chair yet?
Microsoft developed the Linux device drivers to enhance the performance of Linux when virtualized on Windows Server 2008 Hyper-V. My team and I were responsible for testing and validating the driver components that were contributed for this first release.
Now, my team and I will be responsible for further developing this code going forward. (Yes, that does mean that I have gone back to leverage my very early roots as a Kernel programmer. Let the world be warned!!!!). Haiyang Zhang has been working on this code with me, and he will continue to work with me on this going forward.
When I joined Microsoft three years ago, the primary reason was to put my money where my mouth was. You see, complaining about something is easy, but it becomes a little more complicated when somebody offers you the opportunity to be part of helping change what you have complained about.
So, three years after taking the job that made me put my money where my mouth was (and still often is!), I for one am EXTREMELY happy to see one of the most significant fruits of our work here in the Microsoft Open Source Technology Center (OSTC). But I have to say, even I would have been hard-pressed to think three years ago that we would consider contributing to the Linux Kernel.
As you know, two years ago Microsoft announced a partnership with Novell, and Tom Hanrahan ran the lab on a day-to-day basis until about nine months ago. Since then I have had the pleasure of running the technical side of that lab under Tom Hanrahan for the OSTC. One of the primary tasks for the lab is to make sure Windows runs well on top of Xen and Linux runs well on top of Hyper-V, and we do this in very close cooperation with Novell.
We do most of this work as an extension to Mike Neil's Hyper-V team.
As part of this, we were asked to help develop and maintain a crucial part of this work called the Linux Integration Components. This code is designed so that Linux can run in an "enlightened mode" on top of Hyper-V (enlightened mode is roughly the Hyper-V equivalent of "paravirtualized mode" for the Xen hypervisor). Without this driver code, Linux can run on top of Windows, but without the same high performance levels. It is this device driver code that we are releasing today, directly to the Linux Kernel.
We're not talking a few hundred lines of code here; we're talking about roughly 20,000 lines of code.
Is this a Dump and Run from Microsoft? Absolutely not! We plan to enhance the functionality of this code, and we will continue to work with the Linux Community to support the drivers and to ensure continued interoperability.
As you can imagine, this was the result of a lot of hard work: Haiyang Zhang, who has been co-writing this code; Hashir Abdi, who has been testing all this stuff; as well as Vijay Tewari and Mike Sterling from the Hyper-V team, who have been taking care of the Hyper-V side.
And last, but certainly not least, Greg Kroah-Hartman, who has been helping me to make all this code land in the right area in the kernel. He has patiently worked to help me correct my obvious mistakes and to get the code contributed into the kernel.
So where are we today? Well, Greg Kroah-Hartman will make the code visible to the outside world today. (For those who want to get a head start, the code will sit under <your kernel tree>/drivers/staging/hv). After it becomes visible, I will write a few more blogs this week that should help you to understand, build and run them.
The titles I am thinking of for these blogs are:
Where do the Linux ICs reside in the kernel tree and how do I build them?
How do I install, configure and run the Linux ICs?
I had almost forgotten how wrapped up you can be once you start writing code again. So I have not gotten much sleep this past week, but it has been a joy to get back into coding again!
by Gianugo Rabellino on March 08, 2011 09:38am
Recently there were a number of discussions about the wording of the Windows Phone Marketplace Application Provider Agreement, in particular around Excluded Licenses. At that time we clarified that it is possible to publish Open Source applications in the marketplace as long as they are published under a "permissive" license such as Apache or BSD.
I am now happy to announce a development on that front, which underscores that we are not just listening to the community and reviewing the current agreement, but also trying to respond with more clarity. Today the Windows Phone team announced the following:
"We understand the desire for clarification with regard to our policy on applications distributed under open source licenses. The Marketplace Application Provider Agreement (APA) already permits applications under the BSD, MIT, Apache Software License 2.0 and Microsoft Public License. We plan to update the APA shortly to clarify that we also permit applications under the Eclipse Public License, the Mozilla Public License and other, similar licenses, and we continue to explore the possibility of accommodating additional OSS licenses."
by admin on April 19, 2006 05:42pm
Running Command Line Applications on Windows XP/2000 from a Linux Box:
-----Original Message-----
From: swagner@********
Sent: Thursday, April 13, 2006 2:35 PM
To: Port25 Feedback
Subject: (Port25) : You guys should look into _____
Importance: High
Can you recommend anything for running command line applications on a Windows XP/2000 box from within a program that runs on Linux? For example I want a script to run on a Linux server that will connect to a Windows server, on our network, and run certain commands.
One way to do this would be to install an SSH daemon on the Windows machine and run commands via the ssh client on the Linux machine. Simply search the web for information on setting up the Cygwin SSH daemon as a service in Windows (there are docs about this everywhere). You can then run commands with ssh, somewhat like:
ssh administrator@<hostname> 'touch /cygdrive/c/blar'
That will create a file in C:\ called "blar". You can also access Windows commands if you alter the path in the Cygwin environment or use the full path to the command:
ssh administrator@<hostname> '/cygdrive/c/windows/system32/net.exe view'
by MichaelF on May 24, 2007 06:15pm
It's happened to me and I'm sure it has happened to you: your software won't load and your data is now trapped inside your PC. The problem may be a hardware or a software failure, and the problem may seem to be irrecoverable. Yet often Linux can be used to help recover data that otherwise might be lost. This paper describes how one can use Linux to recover data from a non-functioning Windows machine.
by admin on June 16, 2006 01:21pm
Infrastructure Management and Strategic Design: Part 2 – Driving Network Efficiencies
Every computing device in existence today lives and breathes on some sort of a network. It doesn't matter if your home PC is connected to a cable modem or if your office laptop is part of an extended WAN; your device is persistently living and breathing on the network. From the minute you turn on your device, some of the very first drivers to be loaded are networking drivers. Everything from DNS and DHCP to the simple sharing of email and office documents depends on the very basic function of the infrastructure: the network. So why is it that the same network that is considered a Category-A asset in an organization is also sometimes at the very root of serious headaches for a network administrator? The faith and reliance we put in our networking infrastructure is simply staggering. Let's examine why.
If you're a soccer fan like me, I am sure you have been glued to the World Cup coverage. As I sat with some friends and discussed our loss to the Czech Republic, I realized that in that group it didn't matter whether you used Firefox or IE to get the game stats; you JUST wanted to know the score. The thought that followed was that maybe "tools" shouldn't be as hyped up as they sometimes are compared to the stream of "networking" that sustains them. Following that, my thoughts turned to the topic of Net Neutrality and how networks are ideally designed to forward packets regardless of their size, purpose and content. Hmm, this is interesting, I thought. My entire life as an Infrastructure Architect, I have tried to come up with creative ways to manage and optimize network and infrastructure performance towards a better outcome for end users.
You see, one of the most perplexing problems for network admins is controlling and managing "chatty protocols, broadcasts and bandwidth hogs". How can we do that effectively? Let's look at this closely: when I read a blog about Web 2.0 or something similar, some of the questions that pop into my head are "Have they considered what effect the implementation would have on their network performance?" and "How far does the implementation of a new model go towards application bandwidth testing?", given how inherently and intrinsically dependent we are on networking.
Networking, and the simple availability of bandwidth, wired or wireless, has become as much of an expectation as running water. A few months ago I remember someone saying, "The network is like electricity – you don't call up and thank the power company when you turn on the light, do you, nor do you ask them how much power you can use for your house." An intriguing thought; nevertheless, for those Layer 2 and 3 warriors, the term "port saturation" should definitely ring a bell. Key contributors to port saturation are those instances when there just isn't any more bandwidth to go around. To avoid choking up the network, I found some of the following tips helpful if you're managing a small to medium-sized network:
Alright, that’s it. I am logging off and heading to the movies to watch the Da Vinci Code. Will be back here next week and Thank You for tuning into Port25.
by jcannon on August 01, 2006 02:51pm
Today, IT departments offering and managing various IT services might find themselves in what I would call a "pressure-cooker". They are faced with a multitude of tasks and added pressure to maintain daily operations while driving efficacy, managing the growing complexity of service offerings and, most importantly, doing so while keeping pace with industry best practices. This has been one of the most explosive areas of growth and re-examination over the past few years.

Back in my Ops days, I trained under ITIL (the IT Infrastructure Library) and MOF (the Microsoft Operations Framework) to get a first-hand look at some of the best Service Management practices in the industry. No matter how good I thought our Service Management practices might have been, I could not help but think in terms of the maturity level of the services that can be achieved by applying these principles. When you get down to it, you realize that the heart and soul of effective Service Management lies in how mature the offering and support model is. I have learnt a lot from the ITIL Service Management Essentials course, which I attribute to the research and practices that have gone into developing these models. I'd like to share with you what made sense to me:
I am very eager to hear back from those of you who are an integral part of the Service Management lifecycle. Please share your experiences, challenges and lessons learned with us.
Kindest Regards and have a great week ahead!
by Peter Galli on December 08, 2008 10:24am
SMB (Server Message Block) is a remote file protocol commonly used by Microsoft Windows clients and servers that dates back to the 1980s.
Back when it was first used, LAN speeds were typically 10Mbps or less, WAN use was very limited and there were no wireless LANs. Network security concerns like preventing man-in-the-middle attacks were non-existent at that time.
Obviously, things have changed a lot since then. SMB did evolve over time, but it did so incrementally and with great care for keeping backward compatibility. It was only with SMB2 in 2007 that we had the first major redesign.
In this blog Jose Barreto, a senior technical evangelist in Microsoft's Storage Solutions Division, explains some of the history behind the protocol and outlines important improvements in SMB2, particularly in regards to reduced complexity, pipelining and compounding.
A History of SMB and CIFS
When it was first introduced to the public, the remote file protocol was called SMB (Server Message Block). SMB was used, for instance, by Microsoft LAN Manager in 1987 and by Windows for Workgroups in 1992. Later, a draft specification was submitted to the IETF under the name Common Internet File System (CIFS). The CIFS specification is a description of the protocol as it was implemented in 1996 as part of Microsoft Windows NT 4.0. A preliminary draft of the IETF CIFS 1.0 specification was published in 1997. Later, extensions were made to address other scenarios like domains, Kerberos, shadow copy, server to server copy and SMB signing. Windows 2000 (released in 2000) included those extensions. At that time, some people went back to calling the protocol SMB once again. CIFS/SMB has also been implemented on Unix, Linux and many other operating systems (either as part of the OS or as a server suite like Samba). A few times, those communities also extended the CIFS/SMB protocol to address their own specific requirements.
One important limitation of SMB was its "chattiness" and lack of concern for network latency. It would take a series of synchronous round trips to accomplish many of the most common tasks. The protocol was not created with WAN or high-latency networks in mind and there was limited use of compounding (combining multiple commands in a single network packet) or pipelining (sending additional commands before the answer to a previous command arrives). This even led to products created to address the specific issues around SMB WAN acceleration. There were also limitations regarding the number of open files, shares and users. Due to the large number of commands and subcommands, the protocol was also difficult to extend, maintain and secure.
The first major redesign of SMB happened with the release of SMB2 by Microsoft. SMB2 was introduced with Windows Vista in 2007 and updated with the release of Windows Server 2008 and Windows Vista SP1 in 2008.
SMB2 brought a number of improvements, including but not limited to:
It is important to highlight that, to ensure interoperability, SMB2 uses the existing SMB1 connection setup mechanisms, and then advertises that it is capable of a new version of the protocol. Because of that, if the opposite end does not support SMB2, SMB1 will be used.
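As a loose illustration of that fallback behavior (not the actual wire format or real protocol structures), the sketch below has a server pick the newest dialect both ends understand and fall back to SMB1 otherwise; the dialect strings and function are hypothetical simplifications.

```typescript
// Loose illustration of version negotiation with fallback (hypothetical,
// not the real SMB negotiate exchange). The client advertises the dialects
// it speaks; the server picks the newest one it also supports.
const serverDialects = ["SMB1", "SMB2"];   // what this server can speak

function negotiate(clientDialects: string[]): string {
    const preference = ["SMB2", "SMB1"];   // newest first
    for (const dialect of preference) {
        if (clientDialects.includes(dialect) && serverDialects.includes(dialect)) {
            return dialect;
        }
    }
    throw new Error("no common dialect");
}

console.log(negotiate(["SMB1", "SMB2"])); // "SMB2"
console.log(negotiate(["SMB1"]));         // falls back to "SMB1"
```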
The SMB2 protocol specification was published publicly by Microsoft and you can find the link at the end of this post.
One of the ways to showcase the reduced complexity in SMB2 is to make a comparison to the commands and subcommands in the old version.
Here is the complete list of the 19 opcodes (or commands) used by SMB2 in the message exchanges between the client and the server, grouped in three categories:
When you try to get a similar list for the old SMB, things get a little more complex. I tried to make a list of all commands and subcommands using only the documents linked below and came up with over 100:
I make no claim that the list above for SMB is exact or complete, but it does make a point. As an interesting exercise, check the lists above to verify that, while SMB2 has a single WRITE operation, there are 14 distinct WRITE operations in the old protocol.
SMB2 also requires TCP as a transport. SMB2 no longer supports NetBIOS over IPX, NetBIOS over UDP or NetBEUI (as SMB version 1 did).
A key improvement in SMB2 is the way it makes it easy for clients to send a number of outstanding requests to a server. This allows the client to build a pipeline of requests instead of waiting for a response before sending the next request. This is especially relevant when using a high latency network.
SMB2 uses credit-based flow control, which allows the server to control a client's behavior. The server starts with a small number of credits and automatically scales up as needed. With this, the protocol can keep more data "in flight" and better utilize the available bandwidth.
This is key to making a large transfer go from hours (in SMB) to minutes (in SMB2) over a "long, fat pipe" (a high-bandwidth, high-latency network).
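The sketch below is a deliberately simplified, hypothetical model of that credit mechanism, not the real protocol structures: the client may only keep as many requests in flight as it holds credits, and each response can grant additional credits so the window grows.

```typescript
// Hypothetical, simplified model of SMB2 credit-based flow control.
// A client may only have as many requests in flight as it holds credits;
// each response can grant more credits, letting the pipeline deepen.
class CreditedClient {
    constructor(private credits: number) {}

    send(request: string): void {
        if (this.credits === 0) {
            throw new Error("no credits left: must wait for a response");
        }
        this.credits -= 1;
        console.log(`sent ${request}, credits remaining: ${this.credits}`);
    }

    // Called when a response arrives; the server may grant extra credits.
    onResponse(grantedCredits: number): void {
        this.credits += grantedCredits;
        console.log(`response received, credits now: ${this.credits}`);
    }
}

const client = new CreditedClient(2);  // the server starts small...
client.send("READ #1");
client.send("READ #2");                // two requests pipelined without waiting
client.onResponse(2);                  // ...and scales the window up as needed
client.send("READ #3");
```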
For an example of how pipelining in SMB2 can improve performance, check out this blog post.
When you look at the command set for the new SMB2 protocol, you notice that they are all simple operations. The old SMB1 protocol had some complex commands and subcommands that combined a set of simple operations as required in specific scenarios.
One of the important changes in SMB2 is the ability to send an arbitrary set of commands in a single request (a single network round trip). This is called compounding, and it can be used to mimic the old complex operations in SMB1 without the added complexity of a larger command set.
For instance, an old SMB1 RENAME command can be replaced by a single request in SMB2 that combines three commands: CREATE (which can create a new file or open an existing file), SET_INFO and CLOSE. The same can be done for many other complex SMB1 commands and subcommands like LOCK_AND_READ and WRITE_AND_UNLOCK.
This compounding ability in SMB2 is very flexible and the chain of commands can be unrelated (executed separately, potentially in parallel) or related (executed in sequence, with the output of one command available to the next). The responses can also be compounded or sent separately.
This new compounding feature in SMB2 can be used to perform a specific task in less time due to the reduced number of network round trips.
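As a rough sketch of the idea (the types and field names below are hypothetical, not the real SMB2 message structures), a rename can be expressed as a single related chain of three simple commands sent in one round trip:

```typescript
// Rough sketch of SMB2 compounding (hypothetical types, not real protocol structures).
// A rename becomes one network round trip carrying a related chain:
// CREATE (open the file), SET_INFO (new name), CLOSE.
interface Command {
    opcode: "CREATE" | "SET_INFO" | "CLOSE";
    args: Record<string, string>;
}

interface CompoundRequest {
    related: boolean;      // related commands run in sequence, sharing the file handle
    commands: Command[];
}

const rename: CompoundRequest = {
    related: true,
    commands: [
        { opcode: "CREATE",   args: { path: "old.txt" } },    // open the existing file
        { opcode: "SET_INFO", args: { newName: "new.txt" } }, // apply the new name
        { opcode: "CLOSE",    args: {} },                     // release the handle
    ],
};

// All three commands travel in a single request instead of three separate exchanges.
console.log(`commands in one round trip: ${rename.commands.length}`);
```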
I hope this post has helped you understand some of the important improvements in SMB2, particularly in regards to reduced complexity, pipelining and compounding.
Below is a list of important links that document SMB2, SMB and CIFS, including the latest protocol specifications published by Microsoft:
by Peter Galli on April 22, 2009 01:37pm
Microsoft is sponsoring research at the University of Michigan's Center for Information Technology Integration (CITI) to develop an open source Network File System client for Windows. This will enable Windows to better interoperate with this emerging Internet storage protocol for fast file sharing.
NFS is a commonly used protocol for sharing files among networked computers and storage hardware, particularly with UNIX and Linux-based software. NFSv4 is the latest version of the protocol and adds support for parallel access to file servers, object storage, and storage area network infrastructures.
Bob Muglia, the president of Microsoft's Server and Tools Business and a University of Michigan alumnus, expressed excitement about the project, saying that NFSv4.1 is an important standard for accessing parallel file systems in the high-performance computing market, where access to vast amounts of data is critical in areas like scientific or technical computing systems.
"We believe that customers want to be able to choose the technologies that best meet their needs and that also interoperate with existing systems. Ultimately, CITI's work will help change the way customers can combine their systems by enabling computers running Windows to directly and easily access NFS file shares on servers running Linux, Solaris, and AIX operating systems."
CITI, which is a research unit in the College of Engineering, developed the open source Linux-based reference implementation of NFSv4 that is already included in all Linux distributions. However, Peter Honeyman, a research professor in the division of Computer Science and Engineering and principal investigator of this project, notes that Windows is a critical component in the University's research cyber-infrastructure, responsible for the control of instruments in laboratories across the university, in medicine, engineering, geosciences, bioinformatics, and many other disciplines.
"So this project is especially important in helping university scientists and engineers fill a gap in the storage fabric. This partnership also shows how the university can serve as a living laboratory for the development of interoperable enterprise scale systems that meet the needs of industry and academia," he said.
by hjanssen on March 04, 2008 10:50am
I have this really odd feeling of déjà vu. In the last blog I wrote (here), I started by saying that I have been very delinquent with writing blogs. So what do I do? I continue to be very delinquent with my blogs!
If anything, I am consistent! :)
So, why this sudden re-emergence of myself on Port25? (Besides the fact that I just realized it has been months since I last blogged, that is.) Well, we have been keeping ourselves very busy over the last few months. A lot of that work relates directly to PHP, so I wanted to talk about some of these efforts.
We have significantly increased our work in this area, and my group continues to find itself at the center of pretty much all of these efforts. The SQL Server driver for PHP is now in its second release (get the latest bits here), and we continue to make improvements and enhancements to this driver going forward.
We have also been working very closely with the PHP community on improving PHP performance on Windows. This is an effort that will be ongoing and probably accelerating in the months to come.
With the release of Windows Server 2008, I wanted to take a moment and highlight some of the things that have been done to make Windows an excellent PHP Deployment platform.
First of all, late last year Microsoft released FastCGI for IIS6 and IIS7. IIS7 is integrated with Windows Server and includes FastCGI as part of its deployment (rather than as an optional download, as is the case with IIS6). In effect, this means that with IIS7, Microsoft has added out-of-the-box, Microsoft-supported software designed to run PHP on IIS.
Equally interesting, we have been working with Zend to help them certify Zend Core for Windows Server 2008. I think this makes it one of the first PHP products to be certified for WS2008, which makes Windows PHP-ready :). Imagine that: PHP running on Windows Server, fully certified!
To this end, we will continue to work very closely with Zend to improve PHP on Windows.
Secondly (although not directly related to Windows Server), we have also been working with Zend to provide CardSpace functionality in the Zend Framework. You can get it here, and read more about how to use it at the Zend website here. This is another way in which Microsoft has added support to PHP for Microsoft technologies.
This continues the close relationship with Zend into the future, and I am expecting to work with them on a whole host of other PHP related efforts going forward.
As you could read in earlier blogs by Garrett Serack, we hosted the Apache Software Foundation guys here in Redmond last week. So look for a blog from me later this week wrapping up that visit.
by Peter Galli on July 06, 2009 12:27pm
I have some good news to announce: Microsoft will be applying the Community Promise to the ECMA 334 and ECMA 335 specs.
ECMA 334 specifies the form and establishes the interpretation of programs written in the C# programming language, while the ECMA 335 standard defines the Common Language Infrastructure (CLI) in which applications written in multiple high-level languages can be executed in different system environments without the need to rewrite those applications to take into consideration the unique characteristics of those environments.
"The Community Promise is an excellent vehicle and, in this situation, ensures the best balance of interoperability and flexibility for developers," Scott Guthrie, the Corporate Vice President for the .Net Developer Platform, told me July 6.
It is important to note that, under the Community Promise, anyone can freely implement these specifications with their technology, code, and solutions.
You do not need to sign a license agreement, or otherwise communicate to Microsoft how you will implement the specifications.
The Promise applies to developers, distributors, and users of Covered Implementations without regard to the development model that created the implementations, the type of copyright licenses under which they are distributed, or the associated business model.
Under the Community Promise, Microsoft provides assurance that it will not assert its Necessary Claims against anyone who makes, uses, sells, offers for sale, imports, or distributes any Covered Implementation under any type of development or distribution model, including open-source licensing models such as the LGPL or GPL.
You can find the terms of the Microsoft Community Promise here.
I told you this was good news!