The TypeScript language is made available under the Open Web Foundation’s OWFa 1.0 Specification Agreement, and the community is invited to discuss the language specification. Microsoft’s implementation of the compiler is also available on CodePlex under the Apache 2.0 license. There you can view the roadmap, and over the next few weeks and months you’ll see the TypeScript team continue to develop on CodePlex in the open.
TypeScript is one foray into making programming languages and tooling even more productive. Pick it up, take it for a spin, and give your feedback. You can contribute by discussing the language specification or filing a bug.
Olivier Bloch, Senior Technical Evangelist, Microsoft Open Technologies, Inc.
by Sandy Gupta on September 29, 2009 06:15am
Microsoft recently made a significant code drop to the Apache Qpid project. For those of you who don't know, Qpid is Apache's implementation of the Advanced Message Queuing Protocol (aka AMQP), which is an exciting new reliable messaging protocol developed by some of the world's biggest messaging users (think names like JPMorgan Chase).
What we've contributed is a Windows Communication Foundation (WCF) channel for AMQP. Our goal is to provide a first class AMQP experience for the .NET developer. And, since this is an Apache project we're talking about, all our code is obviously open source.
A couple of years ago the announcement of Microsoft making a major contribution to an open source project would have been sensational news, but things have moved on a little bit since then. Now it's just another day's work at Microsoft.
As the manager of this effort at Microsoft, I'd like to talk a little about what we bring to the open source table. We joined the Apache Qpid project, which was essentially focused on Linux, to help the community develop a port on Windows and integration with the .NET stack.
That was reflected both in the code itself and also in the build environment it used (autotools). One of the areas where we invested was the introduction of a cross-platform build and test environment (CMake), so as to smooth the way for cross-platform work.
We have continued to adopt our typical product development quality assurance mechanisms when working with open-source. These include team-based design and code reviews. We also use automated code-quality tools, such as StyleCop, to ensure consistent style and to detect common programming errors.
We developed the initial version of the WCF Channel on a private Subversion repository and used a private bug database for logging issues and work items. Now that we have made the initial drop to the community, we intend to do all revisions in the Apache repository and switch to using the community's Jira-based bug tracking system.
Today we have a group of five engineers, both vendors and full-time Microsoft employees, working on Qpid. Over time, it's our goal for many of these folks to achieve committer status on the project. To date, we've worked on the following work items:
For me, leading the AMQP initiative at Microsoft has been quite a learning experience. Our collaboration with the community has been strong and we have received full support from our executives.
Here at Microsoft we understand that AMQP can become the SMTP for Messaging. This means AMQP is going to have a huge beneficial impact on all kinds of users in the years to come, and we want to help make that happen.
Helping develop the open source AMQP reference implementation at Apache Qpid as part of a broad community effort is our way of moving the AMQP ball forward. I'll have a lot more news to report about our efforts in the months to come.
by Peter Galli on January 23, 2010 08:00am
The CodePlex team announced last night that it now supports Mercurial, a distributed source control management system. New projects created on CodePlex.com will now be able to use either Team Foundation Server or Mercurial as the source control repository.
Current project owners who want to switch to Mercurial can do so by contacting CodePlex Support with the project name, and the team will gladly assist.
Mercurial is a Distributed Version Control System (DVCS), which has a very different model for collaborating on an open source project than a centralized system like Team Foundation Server.
So, why the need for another option? According to Sara Ford, the Program Manager for CodePlex, adding DVCS to CodePlex has become a top feature request from users as the popularity of DVCS for open source development has grown significantly.
"Mercurial is one of the most popular distributed version control systems and offers great support for Windows based tools as well as works very well as a hosted service," she says.
by Peter Galli on November 02, 2009 03:01pm
This week the Apache Software Foundation celebrates its 10th anniversary at its annual U.S. ApacheCon 2009 event in Oakland. As such, I thought it would be interesting to chat with ASF President Justin Erenkrantz about the past 10 years and what's still to come.
Peter Galli: Tell me about ApacheCon, who the audience is, what the goal of the event is.
Justin Erenkrantz: Since The Apache Software Foundation (ASF) is so globally distributed, with almost 2,000 Committers around the world working on over 100 different projects, we do all of our work virtually, via public mailing lists.
As such, ApacheCon presents a unique opportunity for our community - users, contributors, and developers - to get together face-to-face. We typically try to run at least two shows a year: this year's U.S. show is being held in Oakland, and we held ApacheCon Europe in Amsterdam earlier this year.
At ApacheCon, we have a range of trainings, talks, and MeetUps. We have half-day, full-day, and two-day trainings typically led by key developers in the project. This immersive environment allows interested parties to dive down into tremendous detail about Apache projects - popular trainings include Hadoop, Solr, Tomcat, ActiveMQ, Wicket, Lucene and, of course, our well-known HTTP Server.
In addition to the trainings, we have three days of session tracks (usually hour-long talks) covering broad topics such as: Content Technology (content management systems including Sling and Jackrabbit, as well as CouchDB and POI), Web Services (Axis and other SOA tools), OFBiz (our Enterprise Resource Planning solution), Tomcat (our popular Java servlet engine...well it does much more than that these days!), Felix (our implementation of the OSGi framework) and, of course, some talks about the HTTP Server.
One thing that we're really excited about this year is our expansion of free MeetUps in the evening. These are a great opportunity to mingle with the community in a very low-key, unstructured environment focused on a single topic. You can think of a MeetUp as an all-night "birds of a feather" (BOF) session. In addition, we will be holding BarCamp Apache -- our two-day un-conference to talk about whatever folks are interested in -- as well as the Hackathon, where participants can collaborate on various code bases alongside Apache Committers. The great thing about the MeetUps, BarCamp, and Hackathon is that they're open to the public, free of charge. All are welcome!
Peter Galli: You always hear a lot about the "Apache Way." Explain this to me.
Justin Erenkrantz: As an all volunteer, non-profit organization, the ASF is regularly praised for its consistent, repeatable, open development model. This model, affectionately dubbed by some as "the Apache Way", is behind the ASF's success in scaling from a single project to 70 primary projects today.
One of our biggest challenges, as the ASF has grown to nearly 2,000 Committers, is how to teach the Apache Way to those interested in bringing new Open Source projects to the Foundation. The way to address this on a formal level is through the Apache Incubator, created to "mentor" new projects and to assist in their learning how to operate as an ASF project. ASF Members who find a candidate technology (called a "podling") worth pursuing can then volunteer to mentor the project.
Rather than overseeing its technical development, the mentor's main responsibility to a podling is more social, by helping to pass down the traditions and culture of other projects. Over time, once the podling has demonstrated that it has learned the Apache Way and can govern itself successfully, it can become a full-fledged ASF project and graduate to a top-level project.
Anyone can submit a podling proposal to the Incubator for consideration as a new ASF project. If you have an existing Open Source project and would like to join the ASF, we encourage you to check out the Incubator, and submit your proposals to email@example.com.
Peter Galli: Microsoft has been working closely with the Apache Community for some time now. Can you talk about how that works and why our participation is important?
Justin Erenkrantz: As you know, last year Microsoft announced its Platinum Sponsorship of the ASF, which it continued this year. While we are delighted to have Microsoft's financial support as a sponsor of the Foundation, I think the more important aspect of Microsoft's relationship is that they are now contributing to a variety of Apache projects.
Since we announced the sponsorship last year, Microsoft has begun contributing to at least four Apache projects: HBase, Stonehenge, Qpid, and POI. This continues a significant sea change within the organization - Microsoft is no longer afraid of having its employees contribute to Apache projects on Microsoft's time. Committers from Microsoft sign the same legal agreements that we require from all of our contributors.
Microsoft's involvement in these specific communities ranges from having employees who are core contributors driving a project, to folks contributing patches or ideas on our mailing lists, to even commissioning a third party to contribute to a project as a work-for-hire. In other words, Microsoft is now actively participating within Apache projects in a broad range of ways.
In recent conversations with the Port25 team at Microsoft, it sounds like there are even more Apache projects that Microsoft is interested in getting involved in. We look forward to Microsoft's continued and increased contribution and participation within Apache.
As a public charity, we rely on donations from the public. Our policy is not to provide direct funding for our projects (we do not pay for contributions to any of our projects); however, there are a number of indirect needs to support our projects. The biggest chunk of our budget goes towards maintaining our servers - we maintain SCM systems (currently Subversion-based), a mirror distribution system (seeding a large number of volunteer mirrors), build farms, Web sites, and mailing lists.
We have key data centers at Oregon State University's Open Source Lab and SURFnet in the Netherlands. Since we have a growing number of contributors in the Pacific Rim, we're looking to expand our server presence in those regions. Through our Travel Assistance Committee, we also use our funds to help community members (typically college students) who could not otherwise attend our events - this has been a fantastically successful project in helping to encourage further participation. Finally, we also use some of our funds to help spread our message - so many folks still think that the ASF is just about the HTTP Server. It's not! It's only one of 70 different top-level projects, so we realize we still have to do some education on that front!
Peter Galli: What are some of the most exciting projects that have been developed by the Apache community, or are currently being worked on?
Justin Erenkrantz: There are so many exciting projects that it's hard to choose from! As before, some folks think that the ASF is just about the HTTP Server: we have projects ranging from Atom/RSS parsers/producers (Abdera) to generating high-quality printable graphics via XML (XMLGraphics). Some folks don't often connect the dots and realize that projects like CouchDB, SpamAssassin, and Hadoop are all Apache projects. And it's important to know that, via our Incubator and Labs projects, we're open to shepherding even more projects.
As we celebrate our tenth anniversary, we've established ourselves as an important player in the ecosystem. We were founded on pragmatic principles, but that hasn't stopped us from taking a leadership position: our Apache License version 2 is the flag-bearer for permissive Open Source licenses, and we have been a strong advocate for openness and transparency within the Java standards process. The next ten years will be an exciting ride!
We should also point out eWeek's recent story on eleven Apache technologies that have changed computing in the last 10 years.
Peter Galli: What do you hope to see coming from the community over the next few years?
Justin Erenkrantz: Our purpose in founding the ASF ten years ago was to bring the "Apache Way" to a broader community than just the initial HTTP Server. Our goal is to continue that process: we realize that developers are best at coding and shouldn't have to worry about the gnarly details - be it setting up servers, distributing files, accepting donations, handling legalese, organizing events, etc. - and should instead just focus on creating terrific code. So, we hope to see more ideas for projects come our way through our Incubator and Labs!
In case you missed it, I just wanted to flag this blog from Jean Paoli:
I am really excited to be able to share with you today that Microsoft has announced a new wholly owned subsidiary known as Microsoft Open Technologies, Inc., to advance the company’s investment in openness – including interoperability, open standards and open source.
My existing Interoperability Strategy team will form the nucleus of this new subsidiary, and I will serve as President of Microsoft Open Technologies, Inc.
The team has worked closely with many business groups on numerous standards initiatives across Microsoft, including the W3C’s HTML5, IETF’s HTTP 2.0, cloud standards in DMTF and OASIS, and in many open source environments such as Node.js, MongoDB and Phonegap/Cordova.
We help provide open source building blocks for interoperable cloud services and collaborate on cloud standards in DMTF and OASIS; support developer choice of programming languages to enable Node.js, PHP and Java in addition to .NET in Windows Azure; and work with the PhoneGap/Cordova and jQuery Mobile and other open source communities to support Windows Phone.
It is important to note that Microsoft and our business groups will continue to engage with the open source and standards communities in a variety of ways, including working with many open source foundations such as Outercurve Foundation, the Apache Software Foundation and many standards organizations. Microsoft Open Technologies is further demonstration of Microsoft’s long-term commitment to interoperability, greater openness, and to working with open source communities.
Today, Microsoft supports thousands of open standards, and many open source environments, including Linux, Hadoop, MongoDB, Drupal, Joomla and others, run on our platform.
The subsidiary provides a new way of engaging in a more clearly defined manner. This new structure will help facilitate the interaction between Microsoft’s proprietary development processes and the company’s open innovation efforts and relationships with open source and open standards communities.
This structure will make it easier and faster to iterate and release open source software, participate in existing open source efforts, and accept contributions from the community. Over time the community will see greater interaction with the open standards and open source worlds.
As a result of these efforts, customers will have even greater choice and opportunity to bridge Microsoft and non-Microsoft technologies together in heterogeneous environments.
I look forward to sharing more on all this in the months ahead, as well as to working not only with the existing open source developers and standards bodies we work with now, but with a range of new ones.
Hello web and mobile developers!
As you probably noticed, jQuery Mobile version 1.0 was announced this week. We are pleased to use this exciting occasion to reinforce our commitment to supporting popular open source mobile frameworks.
Among our most recent activities, I want to highlight the work done to support PhoneGap by adding support for Windows Phone 7.5 (Mango); now we are moving up the stack to improve support for jQuery Mobile on Windows Phone 7.5.
While today's version 1.0 and the recent RC releases contain many features, we wanted to take a minute to highlight the collaboration we started with the jQuery Mobile team. In the last few weeks we have focused our attention on supporting Kin Blas and others in the community to improve performance on Windows Phone 7.5.
In particular, as the RC3 blog published earlier this week outlines, Windows Phone performance has improved quite dramatically as shown by the two showcase apps:
The jQuery team also has additional performance optimization tips for Windows Phone in the change log that save further time in certain scenarios.
We are pretty encouraged with this progress, and will continue working with the community to bring higher levels of performance and support for jQuery features to Windows Phone... stay tuned, and congratulations again to the jQuery Mobile team!
Abu Obeida Bakhach
Interoperability Strategy Program Manager
by Claudio Caldato on December 15, 2010 08:00am
As you know, Microsoft is committed to interoperability, and the IE team has previously blogged about and provided developer previews and samples showing "Same Markup" - the same HTML, CSS, and script working across browsers - in action.
Today, as part of the interoperability bridges work we do on this team, we're making available a new Firefox add-on that enables Firefox users on Windows to play H.264-encoded video on HTML5 by using the built-in capabilities found in Windows 7.
Microsoft has already been offering for several years now the extremely popular Windows Media Player plug-in for Firefox, which is downloaded by millions of people a month who want to watch Windows Media content.
This new plug-in, known as the HTML5 Extension for Windows Media Player Firefox Plug-in, is available for download here at no cost. It extends the functionality of the earlier plug-in for Firefox, and enables web pages that offer video in the H.264 format using standard W3C HTML5 to work in Firefox on Windows. Because H.264 video on the web is so prevalent, this interoperability bridge is important for Firefox users who are Windows customers.
H.264 is a widely-used industry standard, with broad and strong hardware support. This standardization allows users to easily take what they've recorded on a typical consumer video camera, put it on the web, and have it play in a web browser on any operating system or device with H.264 support, such as on a PC with Windows 7.
H.264 is also a very well established and widely supported video compression format, developed for use in high definition systems such as HDTV, Blu-ray and HD DVD as well as low resolution portable devices. It also offers better quality at lower file sizes than both MPEG-2 and MPEG-4 ASP (DivX or XviD).
The HTML5 Extension for Windows Media Player Firefox Plug-in continues to offer our customers value and choice, since those who have Windows 7 and are using Firefox will now be able to watch H.264 content through the plug-in.
Microsoft is already deeply engaged in the HTML5 process with the W3C as we believe that HTML5 will be important in advancing rich, interactive web applications and site design.
Principal Program Manager, Interoperability Strategy Team
Microsoft and The Open Group have announced the release of Open Management Infrastructure (OMI), an open source project to further the development of a production quality implementation of the DMTF CIM and WBEM standards. The Windows Management team has a blog post covering the details of OMI and the goals of the project.
OMI (formerly known as NanoWBEM) is an implementation of the DMTF Common Information Model (CIM) standard, which defines the semantics of management information for networks, applications, and services. Here’s a high-level overview of OMI’s implementation of a CIM server:
Just as the Hardware Abstraction Layer (HAL) helped open up the x86 hardware ecosystem and enable rapid innovation across the industry, CIM-based tools such as OMI form a Datacenter Abstraction Layer (DAL) that provides a framework for interoperability between management tools across diverse platforms and devices. As noted on the Windows Server blog:
“… the growth of cloud-based computing is, by definition, driving demand for more automation, which, in turn, will require the existence of a solid foundation built upon management standards. For standards-based management to satisfy today’s cloud management demands, it must be sophisticated enough to support the diverse set of devices that are required and it must be easy to implement by hardware and platform vendors alike. The DMTF CIM and WSMAN standards are up to the task, but implementing them effectively has been a challenge. Open Management Infrastructure (OMI) addresses this problem.”
Keep an eye on The Open Group's OMI project site for the latest news about OMI's evolution. You can download OMI source code and documentation today (available under an Apache 2.0 open source license), and soon you'll find more detailed documentation, contribution facilities, and information about OMI developer conferences.
Doug Mahugh, Senior Technical Evangelist, Microsoft Open Technologies, Inc.
by hjanssen on July 22, 2009 02:02pm
Well, here is blog number two. The initial shock has worn off a bit I hope.
The feedback I have received so far has been pretty positive. This really all started in October of 2008 in a meeting with Mike Neil (GM of Hyper-V) and Tom and myself from the Open Source Technology Center (OSTC) at Microsoft.
In that meeting I proposed to open source the Linux Integration Components and contribute them to the Linux kernel, and, secondly, to have the OSTC continue contributing to these ICs after they made it into the kernel. Well, after some discussion, we all agreed that this was the right thing to do.
And so the whole process started inside of Microsoft. Hey, what can I say, we like to push the envelope a bit here at the OSTC, and we have a reputation to uphold!
Before I go on, I again wanted to thank the Kernel community (specifically Greg Kroah-Hartman) for explaining the community process and guiding us through it. It gave us a very nice jumpstart to get all of this going, and provided the groundwork for a good working relationship with the community.
I have also seen a few patches already submitted by community members, which is excellent! (Moritz Muehlenhoff gets major kudos for the first community-contributed patch.) I will start submitting patches myself next week once the initial submission has stabilized a bit.
It is my plan to use the kernel as my primary development area, and of course I will continue to provide Greg with my patches. My first step is to clean up the code to make sure it fulfills all Kernel coding standards and requirements.
So, here is blog number two: what are the Linux Integration Components?
1. Overview of Linux VM with ICs
Linux Integration Components (ICs) take advantage of the VMBus and synthetic devices provided by Hyper-V to enhance the performance and usability of Linux guests running on Windows servers.
Figure: Conceptual architecture overview of a Linux guest and Hyper-V. Linux IC modules are shown in yellow.
VSP: Virtualization Service Provider.
VSC: Virtualization Service Client.
VMBus: Data channel between VSP and VSC.
2. Linux IC modules -- VMBus and VSCs
Each VSC has two portions. The driver portion interacts with the Linux kernel like a regular Linux device driver. The core portion is implemented based on the protocol of the corresponding VSP on the Hyper-V host, and interacts with the VSP via the VMBus interface.
3. Descriptions for each Linux IC module
3.1 VMBus driver (hv_vmbus.c)
The VMBus driver is a Linux kernel module. It provides both a lightweight bus driver and library functionality. As a bus driver, it registers with the Linux Driver Model (LDM) framework to provide simple bus, device, and device tree integration (sysfs). As a library, it implements the VMBus channel protocol and provides a channel abstraction to its clients (the disk and network VSCs).
3.2 StorVSC driver (hv_storvsc.c)
The Storage VSC interacts with the Windows Storage VSP. The "wire" protocol defined by the storage VSP determines how a VSC interacts with it. The Linux Storage VSC (LSVSC) basically abstracts the Linux I/O stack from needing to understand the Storage VSP's protocol. At the upper-edge of the LSVSC, it talks to the Linux SCSI subsystem. The Linux SCSI subsystem sees the LSVSC as a SCSI low-level driver (LLD) in Linux parlance. It passes SCSI requests (scsi_cmnd) to LSVSC which in turn converts them into the "wire" format understood by the Windows Storage VSP (VSTOR_PACKET). The bottom-edge of the LSVSC talks to Linux VMBus (LVMBUS) which in turn talks to the Windows VMBus to route the packets to the Storage VSP.
3.3 BlkVSC driver (hv_blkvsc.c)
BlkVSC (BlockVSC) supports "fast boot" and fast access to IDE disks. To enable enlightened IDE support for enhancing the performance of Linux when virtualized on Windows, a separate BlockVSC component is used as a Linux block device driver. Like StorVSC, the BlockVSC component is composed of an upper-edge wrapper that interfaces with the Linux block layer and a lower edge that goes through the infrastructure modules. The infrastructure modules communicate with Hyper-V through the Linux VMBus.
3.4 NetVSC driver (hv_netvsc.c)
The network VSC sends and receives network traffic between a Linux guest and the Hyper-V host, which has a direct connection to the physical network. The mechanism used to accomplish this is the Remote NDIS (RNDIS) protocol: communication between the VSP and the VSC happens over RNDIS, with the RNDIS messages packaged and forwarded as payload to the other side over the NetVSP/VMBus protocol.
4. Location of the Linux ICs in the kernel tree
Hopefully you now have a better idea of what they are. But where in the kernel tree can you find them?
Well, you can find the sources in the linux-next tree, in the drivers/staging/hv directory.
The git repository you can find them in right now is the linux-next tree. To download this repository to your machine (assuming your system is set up correctly), run:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/sfr/linux-next.git <your local name>
Since the ICs are part of the kernel now, we follow the normal community process for getting all of this migrated into Linus's mainline kernel.
Great news for all Node.js developers wanting to use Windows: today we reached an important milestone - v0.6.0 – which is the first official stable build that includes Windows support.
This comes some four months after our June 23rd announcement that Microsoft was working with Joyent to port Node.js to Windows. Since then we’ve been heads down writing code.
Those developers who have been following our progress on GitHub know that there have been Node.js builds with Windows support for a while, but today we reached the all-important v0.6.0 milestone.
This accomplishment is the result of a great collaboration with Joyent and its team of developers. With the dedicated team of Igor Zinkovsky, Bert Belder and Ben Noordhuis under the leadership of Ryan Dahl, we were able to implement all the features that let Node.js run natively on Windows.
And, while we were busy making the core Node.js runtime run on Windows, the Azure team was working on iisnode to enable Node.js to be hosted in IIS.
Among other significant benefits, native Windows support gave Node.js significant performance improvements, as reported by Ryan on the nodejs.org blog.
Node.js developers on Windows will also be able to rely on NPM to install the modules they need for their applications. Isaac Schlueter from the Joyent team is currently working on porting NPM to Windows, and an early experimental version is already available on GitHub. The good news is that soon we'll have a stable build integrated into the Node.js installer for Windows.
So stay tuned for more news on this front.
We are excited to be attending and participating at Node Summit in San Francisco this week.
Among those Microsoft staffers on site are Server & Tools Corporate Vice President Scott Guthrie - who participated on a panel about Platform as a Service this morning and also gave a keynote address - and Gianugo Rabellino, the Senior Director for Open Source Communities, who was on a panel discussing the importance of cross-platform development.
You can read more about Scott's keynote on the Windows Azure blog here.
As this work continues inside Microsoft as well as with the Node.js community and our partner ecosystem, new and exciting capabilities are becoming available that allow Node.js developers to have great experiences on the Windows platform.
Today, during his keynote, Scott Guthrie demonstrated how easy it is to get up and running with Node.js on Windows and Windows Azure, while our partners at Cloud9 showcased new tooling experiences that provide even greater flexibility to Node.js for developers who want to build for Windows Azure.
Microsoft has been closely partnering with Joyent for some time now to port Node.js to Windows. We have built an IO abstraction library with them that can be used to make the code run on both Linux and Windows.
We also recently released the Windows Azure SDK for Node.js as open source, available on GitHub. These libraries are the perfect complement to our recently announced contributions to Node.js and provide a better Node.js experience on Windows Azure. The Windows Azure Developer Center provides documentation, tutorials, samples, and how-to guides to get started with Node.js on Windows Azure.
The Joyent team also recently updated the Node Package Manager (NPM) code to allow use of NPM on Windows. NPM is an essential tool for Node.js developers, so having support for it on Windows makes for a better development experience there.
We are also working with the Joyent team on improving the development experience by leveraging the power of Microsoft development tools and documentation, making it easier for developers to use Node.js APIs.
And, relatedly, we have also been working closely with 10Gen and the MongoDB community in the past few months, and MongoDB already runs on Windows Azure. If you’re using the popular combination of Node.js and MongoDB, a simple straightforward install process will get you started on Windows Azure. You can learn more here.
Our interest in, and support for, Node.js is just one of the ways in which Windows Azure is continuing on its roadmap of embracing the open source software tools developers know and love, working collaboratively with the open source community to build a better cloud that supports all developers and their need for interoperable solutions based on developer choice.
As Microsoft continues to provide incremental improvements to Windows Azure, we remain committed to working with developer communities.
We also clearly understand that there are many different technologies that developers may want to use to build applications in the cloud: they want to use the tools that best fit their experience, skills, and application requirements, and our goal is to enable that choice.
All of this delivers on our ongoing commitment to providing an experience where developers can build applications on Windows Azure using the languages and frameworks they already know, enabling greater customer flexibility for managing and scaling databases, and making it easier for customers to get started and use cloud computing on their terms.
I can hardly believe it's been almost four months since we announced the foundation of Microsoft Open Technologies, Inc. Boy, have we been busy: Redis, jQuery Mobile, OData, HTTP 2.0, Doctrine, Symfony, Apache Solr, Windows Azure tools for Mac and Linux, Eclipse, Apache Cordova, MongoDB, CouchDB, Entity Framework, ASP.NET MVC 4, Web API, Web Pages, WebRTC, WordPress… the list goes on, and the road ahead of us is nothing short of amazing. We are heading toward one of the most important years in Microsoft history, and everyone in MS Open Tech is excited and busy advancing our journey into openness.
We have come to a point where we could really use some help, and I believe we have quite a few great opportunities to join our team. We are currently looking for developers, program managers, technical diplomats, and evangelists. What these profiles have in common is a strong interest in, and experience with, open technologies (open standards and open source), interoperability, and community engagement. We strongly believe in letting code do the talking for us: we are pragmatic, agile, focused on real-life scenarios, and deeply engaged in open conversations with the community at large.
We are having tons of fun pursuing our passion for solving customer and developer issues by creating innovative solutions and technical bridges between Microsoft and non-Microsoft technologies, and we want more of the same. Please visit the Microsoft careers website to learn more about the open positions and by all means do get in touch: we would love to talk to you.
Good news for all you Java developers out there: I am happy to share with you the availability of Windows Azure libraries for Java that provide Java-based access to the functionality exposed via the REST API in Windows Azure Service Bus.
You can download the Windows Azure libraries for Java from GitHub.
This is an early step as we continue to make Windows Azure a great cloud platform for many languages, including .NET and Java. If you’re using Windows Azure Service Bus from Java, please let us know your feedback on how these libraries are working for you and how we can improve them. Your feedback is very important to us!
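Under the hood, these libraries wrap the Service Bus REST API. For a sense of what they save you from writing, here is a minimal sketch of a raw send-message call using only the JDK; the namespace, queue name, and WRAP token are hypothetical placeholders, and acquiring the token from the Access Control Service is assumed to have happened elsewhere.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ServiceBusSketch {
    // Builds the REST endpoint for sending a message to a queue.
    // The namespace and queue names are illustrative placeholders.
    static String messageUri(String namespace, String queue) {
        return "https://" + namespace + ".servicebus.windows.net/" + queue + "/messages";
    }

    // Sends one message via HTTP POST. The WRAP access token must be
    // acquired separately; it is passed in here as an assumption.
    static int send(String namespace, String queue, String token, String body) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(messageUri(namespace, queue)).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "WRAP access_token=\"" + token + "\"");
        conn.setRequestProperty("Content-Type", "text/plain");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode();
    }
}
```

The libraries take care of this plumbing, plus serialization and error handling, which is exactly the kind of feedback-worthy surface area mentioned above.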
You may refer to Windows Azure Java Developer Center for related information.
Openness and interoperability are important to Microsoft, our customers, partners, and developers. We believe these libraries will enable Java applications to connect more easily to Windows Azure, and to the Service Bus in particular, making it easier for applications written on any platform to interoperate with each other through Windows Azure.
Senior Program Manager, Microsoft’s Interoperability Group
More open source goodness from Microsoft today, with the announcement that we are open sourcing ASP.NET MVC 4, ASP.NET Web API, and ASP.NET Web Pages v2 (Razor), all with contributions, under the Apache 2.0 license.
You can find the source on CodePlex, and all the details on Scott Guthrie's blog.
“We will also for the first time allow developers outside of Microsoft to submit patches and code contributions that the Microsoft development team will review for potential inclusion in the products,” Guthrie says. “We announced a similar open development approach with the Windows Azure SDK last December, and have found it to be a great way to build an even tighter feedback loop with developers – and ultimately deliver even better products as a result.”
You can now browse, sync and build the source tree of ASP.NET MVC, Web API, and Razor here.
In short, as Principal Program Manager Scott Hanselman notes in his blog post about all this goodness: Open Source = Increased Investment. ASP.NET is a part of .NET and will still ship with Visual Studio. It's the same ASP.NET, managed by the same developers, with the same support.
It is also very important to note, as Guthrie points out, that ASP.NET MVC, Web API and Razor will continue to be fully supported Microsoft products that ship both standalone as well as part of Visual Studio (the same as they do today).
“They will also continue to be staffed by the same Microsoft developers that build them today (in fact, we have more Microsoft developers working on the ASP.NET team now than ever before),” he says. “Our goal with today’s announcement is to increase the feedback loop on the products even more, and allow us to deliver even better products. We are really excited about the improvements this will bring.”
We've been participating in creating a roadmap for the adoption of cloud computing throughout the federal government with the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce and the United States' first federal physical science research laboratory. NIST is also known for publishing the often-quoted Definition of Cloud Computing, used by many organizations and vendors in the cloud space.
Microsoft is participating in Standards Acceleration to Jumpstart the Adoption of Cloud Computing (SAJACC), a NIST initiative whose goal is to formulate a roadmap for the adoption of high-quality cloud computing standards. One way it does this is by providing working examples that show how key cloud computing use cases can be supported by interfaces implemented by cloud services available today. Microsoft worked with NIST and our partner Soyatec to demonstrate how Windows Azure can support some of the key use cases defined by SAJACC, using our publicly documented and openly available cloud APIs.
NIST works with industry, government agencies, and academia, using an open and ongoing process of collecting and generating cloud system specifications. The hope is that these resources will both accelerate the development of standards and reduce technical uncertainty during the interim period before many cloud computing standards are formalized.
Using the Windows Azure Service Management REST APIs, we are able to manage services, perform basic CRUD operations, and handle authentication and authorization using certificates. Our service management components are built on RESTful principles and support multiple languages and runtimes, including Java, PHP, and .NET, as well as IDEs including Eclipse and Visual Studio.
They also provide rich interfaces and functionality for scalable access to public, private, and hosted clouds, and all of the SDKs are available as open source. The Windows Azure Storage Service REST APIs comprise three sets of APIs that provide storage management for Tables, Blobs, and Queues, following the same RESTful principles and supporting the same set of languages. These APIs are available as open source as well.
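For readers curious about what authenticating to the Storage REST APIs involves, the Storage Services REST reference describes a SharedKey scheme: a canonical string-to-sign built from the request is HMAC-SHA256-signed with the Base64-decoded account key. The signing step alone can be sketched as follows; assembling the exact string-to-sign (verb, headers, canonicalized resource) is omitted here and should be taken from the reference documentation.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class StorageAuthSketch {
    // Computes the SharedKey signature for a Windows Azure Storage REST
    // request: Base64(HMAC-SHA256(stringToSign, Base64Decode(accountKey))).
    // The resulting value goes into the Authorization header as
    // "SharedKey <account>:<signature>".
    static String sign(String stringToSign, String base64AccountKey) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(
                Base64.getDecoder().decode(base64AccountKey), "HmacSHA256"));
        byte[] digest = hmac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }
}
```

The open source SDKs mentioned above implement this signing for you; the sketch is only meant to make the REST layer less opaque.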
We have also created an example, the SAJACC use case drivers, to demonstrate this code in action. This demonstration, written in Java, shows the basic functionality for the NIST sample. We created the following scenarios and corresponding code:
1. Copying Data Objects into a Cloud: the user copies items from their local machine (the client) to Windows Azure Storage without any change to the files; the assumption is a credential consisting of an account name and key pair. The scenario generates a container with a random name on each test execution, using the Windows Azure API, to avoid possible name conflicts. With the credential previously created, the user prepares the Windows Azure Storage execution context. A blob container is then created; with an optional custom network connection timeout and retry policy, you can easily recover from network failures. We then create a block blob and transfer a local file to it. Finally, we compute an MD5 hash for the local file, get one for the blob, and compare the two to show they are equivalent and no data was lost.
2. Copying Data Objects Out of a Cloud repeats what we do in the first use case, Copying Data Objects into a Cloud, and adds another scenario: we set public access on the blob container and get its public URL; then, as an unauthenticated (public) user, we retrieve the blob with an HTTP GET request and save it to the local file system. We then generate an MD5 hash for this file and compare it to the original.
3. Erasing Data Objects in a Cloud erases a data object on behalf of a user. With the credentials and data created in the previous examples, we take the public URL of the blob and delete the blob by name, then verify with an HTTP GET request that it has been erased.
4. VM Control: Allocating VM Instance: the user creates a secure, well-performing VM instance to compute on. The scenario involves creating a Java keystore and truststore from a user certificate to support SSL transport (described below), creating a Windows Azure management execution context to issue commands from, and using it to create a hosted service. We then prepare a Windows Azure service package and copy it to the blob created in the first use case, and deploy it in the hosted service using its name and service configuration information, including the URL of the blob and the number of instances. We can then change the instance count to run as many role instances as we want and verify the change by getting status information back.
5. VM Control: Managing Virtual Machine Instance State: the user can stop, terminate, reboot, and start a virtual machine instance. We first prepare an application to run as a Web Role in Windows Azure. The program mounts a Windows Azure Drive to keep some files persistent when the VM is killed or rebooted, and serves two web pages: one that creates a random file inside the mounted drive, and another that lists all the files on the drive. We then build and package the program and deploy the Web Role as a hosted service on Windows Azure using the portal. Next, we create another program to manage the VM instance state, similar to the previous use case, VM Control: Allocating VM Instance. We use HTTP GET requests to visit the first page, creating a random file on the Windows Azure Drive, and the second page, listing the files to show the drive is not empty. We then use the management execution context to stop the VM and disassociate its IP address, confirming this by visiting the second page, which is no longer available. Finally, we restart the VM with the same management execution context and confirm that the files on the drive persist across restarts.
6. Copying Data Objects between Cloud Providers: the user copies data objects from one Windows Azure Storage account to another. This example involves creating a program that runs as a worker role, in which a storage execution context is created. We use the container from the first use case, Copying Data Objects into a Cloud, and download the blob to the local file system. We then create a second storage execution context and transfer the downloaded file to it. Finally, as in the first use case, we create and deploy a new program that retrieves the two blobs and verifies that their MD5 hashes match.
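The MD5 comparison used to verify round trips throughout these scenarios can be sketched with the JDK alone. This helper is illustrative and not part of the SAJACC drivers themselves; the blob-side hash would come from the storage service, while the local side looks roughly like this:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class Md5Check {
    // Computes a hex MD5 hash over a file's bytes, as the use cases above
    // do to verify that an object survived the round trip to blob storage
    // intact. Suitable for the small demo files in these scenarios; large
    // files would be streamed through the digest instead.
    static String md5Hex(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest(Files.readAllBytes(file));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }
}
```

Comparing the local file's hash before upload with the hash computed after download demonstrates that no data was lost in either direction.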
Java code to test the Service Management API
Managing API Certificates
For the Java examples (use cases 4-6), we need key credentials. In our download, we demonstrate the Service Management API being called with an IIS certificate, and we take you through generating an X.509 certificate for the Windows Azure Management API. Using the IIS 7 management console and the Windows certificate manager, we create self-signed server certificates, export them to the Windows Azure portal, and generate a JKS-format keystore for the Java Azure SDK. We then upload the certificate to the Azure account and convert the keys for use in the Java keystore and for calling the Service Management API from Java. We also demonstrate the Service Management API using Java keytool certificates: we use the Java keystore to export an X.509 certificate for the Windows Azure Management API, upload the certificate to an Azure account, construct a new service management REST object with the specific parameters, and finish by testing the Service Management API from Java.
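As an illustration of the export step, DER-encoded certificate bytes pulled from a Java keystore (for example via `keystore.getCertificate(alias).getEncoded()`) can be wrapped in PEM armor before being handled outside the keystore. This is a sketch under assumptions: the keystore alias and surrounding plumbing are not part of our download, and the Windows Azure portal accepts the raw `.cer` upload directly.

```java
import java.util.Base64;

public class CertPemSketch {
    // Wraps DER-encoded certificate bytes in PEM armor
    // (64-character Base64 lines between BEGIN/END markers),
    // the text form commonly used when moving X.509 certificates
    // between tools.
    static String toPem(byte[] der) {
        String b64 = Base64.getMimeEncoder(64, "\n".getBytes()).encodeToString(der);
        return "-----BEGIN CERTIFICATE-----\n" + b64 + "\n-----END CERTIFICATE-----\n";
    }
}
```

In practice, the keytool walkthrough in our download covers the same ground from the command line; this snippet only shows what the exported artifact contains.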
For more information, the Windows Azure Storage Services REST API Reference and the Windows Azure SDK for PHP Developers are useful resources to have. You may also want to explore further with the following tutorials:
With the above tools and Azure cloud services, you can implement most of the Use Cases listed by NIST for use in SAJACC. We hope you find these demonstrations and resources useful, and please send feedback!
Jas Sandhu, Technical Evangelist, @jassand