As Microsoft continues to deliver on its commitment to Interoperability, I have good news on the Open Source Software front: today, the OData Library for Objective-C project was submitted to the Outercurve Foundation’s Data, Languages, and Systems Interoperability gallery.
This means that OData4ObjC, the OData client for iOS, is now a full, community-supported Open Source project.
The Open Data Protocol (OData) is a web protocol for communications between client devices and RESTful web services, simplifying the building of queries and interpreting the responses from the server. It specifies how a web service can state its semantics such that a generic library can express those semantics to an application, meaning that applications do not need to be custom-written for a single source.
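To make that concrete, here is a small Python sketch (our own illustration, not part of the OData4ObjC release) of how a generic helper can express query semantics purely through OData's URI conventions, using the public Northwind sample service hosted at odata.org:

```python
from urllib.parse import urlencode, quote

def odata_query(service_root, entity_set, filter_expr=None, top=None, select=None):
    """Build an OData query URI. Because the query options ($filter, $top,
    $select, ...) are defined by the protocol itself, a generic helper like
    this works against any OData-compliant service."""
    options = {}
    if filter_expr:
        options["$filter"] = filter_expr
    if top is not None:
        options["$top"] = str(top)
    if select:
        options["$select"] = ",".join(select)
    uri = f"{service_root.rstrip('/')}/{entity_set}"
    if options:
        # quote (not quote_plus) so spaces become %20; keep $ and , literal
        uri += "?" + urlencode(options, quote_via=quote, safe="$,")
    return uri

# Query the Northwind sample service for a few expensive products
uri = odata_query("http://services.odata.org/Northwind/Northwind.svc",
                  "Products",
                  filter_expr="UnitPrice gt 20",
                  top=5,
                  select=["ProductName", "UnitPrice"])
print(uri)
```

An OData client library such as OData4ObjC wraps this same URI grammar, plus response parsing, behind a native API for its platform.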
The Outercurve Foundation already hosts 19 OSS projects and, as Gallery Manager Spyros Sakellariadis notes in his blog post, this is the gallery’s second OData project, the first being the OData Validation project contributed last August.
“With this new assignment, we expect to involve open source community developers even more in the enhancement of seminal OData libraries,” he said.
Microsoft Senior Program Manager for OData Arlo Belshee notes in his blog post that the Open Sourcing of the OData client library for Objective-C will enable first-class support of this important platform. “Combined with existing support for Android (Odata4j, OSS) and Windows Phone (in the odata-sdk by Microsoft), this release provides strong, uniform support for all major phones,” he said.
In assigning ownership of the code to the Outercurve Foundation, the project leads are opening it up for community contributions and support. “They firmly believe that the direction and quality of the project are best managed by users in the community, and are eager to develop a broad base of contributors and followers,” Belshee said.
As Microsoft continues to build and provide Interoperability solutions, Sakellariadis thanked the Open Source communities for their continued support, noting that together “we can all contribute to achieving a goal of device and cloud interoperability, of true openness.”
Congratulations to all the people involved in the PhoneGap community for the recent release of version 1.3 of their HTML5 open source mobile framework!
This release includes many new features, and you can find more details here. You may remember that we announced back in September that Microsoft was helping to bring Windows Phone support to PhoneGap: I am happy to say we can now check this box!
We’re also pleased to note that all features in PhoneGap 1.3 are now supported for Windows Phone, as you can see on their site here.
Also, beyond the core PhoneGap features, developers can enjoy a selection of PhoneGap plugins that support social networks - including Facebook, LinkedIn, Windows Live and Twitter - and solid integration with Visual Studio Express for Windows Phone.
We have also developed further plugins to give HTML5 developers a feel for Windows Phone’s unique features like Live Tile Update and Bing Maps Search.
Please check out the blog post by Jesse MacFadyen, PhoneGap’s dev lead, on his experiences developing PhoneGap on Windows Phone.
For more technical details of using the framework, see Glen’s and Jesse’s technical walk-through blog posts. For a quick spin of what PhoneGap and Visual Studio allow you to do, see this WP7 and Android camera app created in 3 minutes! PhoneGap bits are located here; plugins are here.
As mentioned in PhoneGap’s announcement blog post, the next PhoneGap 1.4 release will be from the Cordova incubation project at Apache. We at Microsoft are proud to be members of this project and to offer technical resources. We welcome the involvement of Adobe, IBM and RIM and look forward to collaboratively growing PhoneGap at its new home in Apache while helping evolve an open web for any device.
Microsoft’s commitment to HTML5 in IE9 has been instrumental in achieving this level of support. We are also building on our HTML5 investment through initiatives like bringing jQuery Mobile support, as we outlined a few weeks ago. Partnering with open source communities to bring this level of openness continues to be an important goal here at Microsoft.
So, stay tuned for more news on our support for popular mobile open source frameworks on WP7.5!
Abu Obeida Bakhach
Interoperability Strategy Program Manager
Today Microsoft is hosting the Learn Windows Azure broadcast event to demonstrate how easy it is for developers to get started with Windows Azure. Senior Microsoft executives like Scott Guthrie, Dave Campbell, Mark Russinovich and others will show how easy it is to build scalable cloud applications using Visual Studio. The event is being broadcast live and will also be available on-demand.
For Java developers interested in using Windows Azure, one particularly interesting segment of the day is a new Channel 9 video with GigaSpaces. Their Cloudify offering helps Java developers easily move their applications, without any code or architecture changes, to Windows Azure.
This broadcast follows yesterday’s updates to Windows Azure around an improved developer experience, Interoperability, and scalability. A significant part of that was an update on a wide range of Open Source developments on Windows Azure, which are the latest incremental improvements that deliver on our commitment to working with developer communities so that they can build applications on Windows Azure using the languages and frameworks they already know.
We understand that developers want to use the tools that best fit their experience, skills, and application requirements, and our goal is to enable that choice. In keeping with that, we are extremely happy to be delivering new and improved experiences for popular OSS technologies such as Node.js, MongoDB, Hadoop, Solr and Memcached on Windows Azure.
You can find all the details on the full Windows Azure news here, and more information on the Open Source updates here.
As Microsoft’s Senior Director of Open Source Communities, I couldn’t be happier to share with you today an update on a wide range of Open Source developments on Windows Azure.
As we continue to provide incremental improvements to Windows Azure, we remain committed to working with developer communities. We’ve spent a lot of time listening, and we have heard you loud and clear.
We understand that there are many different technologies that developers may want to use to build applications in the cloud. Developers want to use the tools that best fit their experience, skills, and application requirements, and our goal is to enable that choice.
In keeping with that goal, we are extremely happy to be delivering new and improved experiences for Node.js, MongoDB, Hadoop, Solr and Memcached on Windows Azure.
This delivers on our ongoing commitment to provide an experience where developers can build applications on Windows Azure using the languages and frameworks they already know, to enable greater customer flexibility for managing and scaling databases, and to make it easier for customers to get started and use cloud computing on their terms with Windows Azure.
Here are the highlights of today’s announcements:
In addition to all this great news, the Windows Azure experience has also been significantly improved and streamlined. This includes simplified subscription management and billing, a guaranteed free 90-day trial with quick sign-up process, reduced prices, improved database scale and management, and more. Please see the Windows Azure team blog post for insight on all the great news.
As we enter the holiday season, I’m happy to see Windows Azure continuing on its roadmap of embracing OSS tools developers know and love, by working collaboratively with the open source community to build together a better cloud that supports all developers and their need for interoperable solutions based on developer choice.
In conclusion, I just want to stress that we intend to keep listening, so please send us your feedback. Rest assured we’ll take note!
I am thrilled to announce the availability of a new specification called SQL Database Federations, which describes additional SQL capabilities that enable data sharding (horizontal partitioning of data) for scalability in the cloud.
The specification has been released under the Microsoft Open Specification Promise. With these additional SQL capabilities, the database tier can provide built-in support for data sharding to elastically scale-out the data. This is yet another milestone in our Openness and Interoperability journey.
As you may know, multi-tier applications achieve elastic scale-out by scaling their front and middle tiers. With this model, as the demand on the application varies, administrators add and remove instances of the front-end and middle-tier nodes to handle the workload.
However, the database tier in general does not yet provide built-in support for such an elastic scale-out model and, as a result, applications have had to custom-build their own data-tier scale-out solutions. Using the additional SQL capabilities for data sharding described in the SQL Database Federations specification, the database tier can now provide built-in support to elastically scale out the data tier, much like the middle and front tiers of applications. Applications and middle-tier frameworks can also more easily use data sharding and delegate data-tier scale-out to database platforms.
Openness and interoperability are important to Microsoft, our customers, partners, and developers, and so the publication of the SQL Database Federations specification under the Microsoft Open Specification Promise will enable applications and middle-tier frameworks to more easily use data sharding, and also enable database platforms to provide built-in support for data sharding in order to elastically scale out the data.
Also of note: The additional SQL capabilities for data sharding described in the SQL Database Federations specification are now supported in Microsoft SQL Azure via the SQL Azure Federation feature.
Here is an example that uses Microsoft SQL Azure to illustrate the use of the additional SQL capabilities for data sharding described in the SQL Database Federations specification.
-- Assume the existence of a user database called sales_db. Connect to sales_db and create a federation called orders_federation to scale out the customers and orders tables. This creates the federation, represented as an object in the sales_db database (the root database for this federation), and also creates the first federation member of the federation.
CREATE FEDERATION orders_federation (c_id BIGINT RANGE)
GO

-- Deploy schema to the root: create tables in the root database (sales_db)
CREATE TABLE application_configuration (…)
GO

-- Connect to the federation member and deploy schema to it
USE FEDERATION orders_federation (c_id = 0) …
GO

-- Create the federated tables: customers and orders
CREATE TABLE customers (customer_id BIGINT PRIMARY KEY, …) FEDERATED ON (c_id = customer_id)
GO
CREATE TABLE orders (…, customer_id BIGINT NOT NULL) FEDERATED ON (c_id = customer_id)
GO

-- To scale out customers' orders, SPLIT the federation data into two federation members
USE FEDERATION ROOT …
GO
ALTER FEDERATION orders_federation SPLIT AT (c_id = 100)
GO

-- Connect to the federation member that contains the value 55
USE FEDERATION orders_federation (c_id = 55) …
GO

-- Update rows in the federation member that contains the value 55
UPDATE orders SET last_order_date = getutcdate() …
GO
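From the application side, the interesting part is the routing statement sent before ordinary queries. As a hedged sketch (the helper name is ours, and the exact USE FEDERATION option syntax should be checked against the specification), a Python helper might build it like this:

```python
def use_federation(federation, key_name, key_value, filtering=True):
    """Build the T-SQL that routes a connection to the federation member
    covering key_value. FILTERING = ON scopes subsequent queries to that
    single federation key value; RESET re-initializes the connection
    context before re-routing."""
    return (f"USE FEDERATION {federation} ({key_name} = {key_value}) "
            f"WITH FILTERING = {'ON' if filtering else 'OFF'}, RESET")

stmt = use_federation("orders_federation", "c_id", 55)
print(stmt)
# The application then issues its normal statements on the same connection,
# e.g.: UPDATE orders SET last_order_date = getutcdate() WHERE customer_id = 55
```

The point of delegating sharding to the database tier is exactly this: the application computes nothing about physical placement; it names a federation key value and the platform routes the connection.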
I am confident that you will find the additional SQL capabilities for data sharding described in the SQL Database Federations specification very useful as you consider scaling-out the data-tier of your applications. We welcome your feedback on the SQL Database Federations specification.
Senior Program Manager, Microsoft’s Interoperability Group
Microsoft's SQL Server team yesterday announced the availability of a preview release of the SQL Server ODBC Driver for Linux, which allows native developers to access Microsoft SQL Server from Linux operating systems.
For customers with native applications spanning multiple platforms, the existing, reliable and enterprise-class ODBC driver for Windows (a.k.a. SQL Server Native Client, or SNAC) has been ported to the Linux platform.
You can download the driver here.
"In this release, the SQL Server ODBC Driver for Linux will be a 64-bit driver for Red Hat Enterprise Linux 5. We will support SQL Server 2008 R2 and SQL Server 2012 with this release of the driver. Notable driver features (in addition to what you would expect in an ODBC driver) include support for the Kerberos authentication protocol, SSL and client-side UTF-8 encoding. This release also brings proven and effective tools, the BCP and SQLCMD utilities, to the Linux world," said Shekhar Joshi, a Senior Program Manager on the Microsoft SQL Server ODBC Driver for Linux team.
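As a hedged sketch of what using the driver looks like from code (shown in Python via the community pyodbc bridge; the helper, host name and credentials are our own illustration, and the driver name should match what your odbcinst.ini registers):

```python
def sqlserver_conn_str(server, database, uid=None, pwd=None,
                       driver="SQL Server Native Client 11.0"):
    """Assemble an ODBC connection string for SQL Server. The default
    driver name here is an assumption - substitute whatever name your
    Linux ODBC driver installation registered."""
    parts = [f"DRIVER={{{driver}}}", f"SERVER={server}", f"DATABASE={database}"]
    if uid is not None:
        parts += [f"UID={uid}", f"PWD={pwd}"]
    else:
        # No explicit credentials: let the driver attempt integrated
        # (Kerberos) authentication, one of the features called out above
        parts.append("Trusted_Connection=yes")
    return ";".join(parts)

cs = sqlserver_conn_str("tcp:sqlhost.example.com,1433", "sales_db",
                        uid="app_user", pwd="secret")
print(cs)
# Connecting (requires the driver plus the third-party pyodbc module):
#   import pyodbc
#   with pyodbc.connect(cs) as cnxn:
#       print(cnxn.cursor().execute("SELECT @@VERSION").fetchone()[0])
```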
This is another example of both Microsoft and the SQL team's commitment to interoperability.
You can read Shekhar's full blog post here, while additional information on the first release of Microsoft ODBC Driver for Linux can be found here.
Hello web and mobile developers!
As you probably noticed, jQuery Mobile version 1.0 was announced this week. We are pleased to use this exciting occasion to reinforce our commitment to supporting popular open source mobile frameworks.
Among the most recent activities, I want to highlight the work done to support PhoneGap by adding Windows Phone 7.5 (Mango) support; now we are moving up the stack to improve support for jQuery Mobile on Windows Phone 7.5.
While today’s version 1.0 and the recent RC releases contain many features, we wanted to take a minute to highlight the collaboration we started with the jQuery Mobile team. In the last few weeks we have focused our attention on supporting Kin Blas and others in the community in improving performance on Windows Phone 7.5.
In particular, as the RC3 blog published earlier this week outlines, Windows Phone performance has improved quite dramatically as shown by the two showcase apps:
The jQuery team has additional performance optimization tips for Windows Phone in the change log that save additional time in certain scenarios.
We are pretty encouraged by this progress, and will continue working with the community to bring higher levels of performance and support for jQuery features to Windows Phone... stay tuned, and congratulations again to the jQuery Mobile team!
Interoperability Strategy Program Manager
Great news for all Node.js developers wanting to use Windows: today we reached an important milestone - v0.6.0 – which is the first official stable build that includes Windows support.
This comes some four months after our June 23rd announcement that Microsoft was working with Joyent to port Node.js to Windows. Since then we’ve been heads down writing code.
Those developers who have been following our progress on GitHub know that there have been Node.js builds with Windows support for a while, but today we reached the all-important v0.6.0 milestone.
This accomplishment is the result of a great collaboration with Joyent and its team of developers. With the dedicated team of Igor Zinkovsky, Bert Belder and Ben Noordhuis under the leadership of Ryan Dahl, we were able to implement all the features that let Node.js run natively on Windows.
And, while we were busy making the core Node.js runtime run on Windows, the Azure team was working on iisnode to enable Node.js to be hosted in IIS.
Among other significant benefits, Windows native support gave Node.js significant performance improvements as reported by Ryan on the Node.js.org blog.
Node.js developers on Windows will also be able to rely on NPM to install the modules they need for their applications. Isaac Schlueter from the Joyent team is currently working on porting NPM to Windows, and an early experimental version is already available on GitHub. The good news is that soon we’ll have a stable build integrated into the Node.js installer for Windows.
So stay tuned for more news on this front.
Principal Program Manager, Interoperability Strategy Team
In a couple of weeks it will be my one-year anniversary here at Microsoft and I couldn’t wish for a better anniversary gift: now that Microsoft has laid out its roadmap for Big Data, I’m really excited about the role that Apache Hadoop™ plays in this.
In case you missed it, Microsoft Corporate Vice President Ted Kummert earlier today announced that we are embracing Hadoop, with plans to deliver enterprise-class Apache Hadoop-based distributions on both Windows Server and Windows Azure.
This news is loaded with goodies for the big data community, broadening the accessibility and usage of Hadoop-based technologies among developers and IT professionals, by making it available on Windows Server and Windows Azure.
But there is more. Microsoft will be working with the community to offer contributions for inclusion into the Apache Hadoop project and its ecosystem of tools and technologies.
I believe that all of this will really benefit not only the broader Open Source community, by enabling them to take their existing skill sets and assets and use them on Windows Azure and Windows Server, but also developers, our customers and partners. It is also another example of our ongoing commitment to providing Interoperability, compatibility and flexibility.
As a proud member of the Apache Software Foundation, I personally could not be happier to see how Microsoft is willing to engage in such an important Open Source project and community.
On the more technical front, we have been working on a simplified download, installation and configuration experience of several Hadoop related technologies, including HDFS, Hive, and Pig, which will help broaden the adoption of Hadoop in the enterprise.
The Hadoop based service for Windows Azure will allow any developer or user to submit and run standard Hadoop jobs directly on the Azure cloud with a simple user experience.
Let me stress this once again: it doesn’t matter what platform you are developing your Hadoop jobs on - you will always be able to take a standard Hadoop job and deploy it on our platform, as we strive towards full interoperability with the official Apache Hadoop distribution.
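To see why standard jobs port so easily, consider a minimal Hadoop Streaming-style word count, sketched here in Python (our own illustration, not Microsoft sample code). Streaming tasks are just programs that read and write lines of text, so the identical script can be submitted wherever Hadoop runs:

```python
from itertools import groupby

def mapper(lines):
    """Map phase: emit one tab-separated (word, 1) pair per word."""
    for line in lines:
        for word in line.split():
            yield f"{word.lower()}\t1"

def reducer(pairs):
    """Reduce phase: Hadoop delivers mapper output sorted by key, so
    consecutive pairs for the same word can be summed with groupby."""
    split = (p.split("\t") for p in pairs)
    for word, group in groupby(split, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

# Local smoke test; sorted() mimics the shuffle/sort Hadoop performs
# between the two phases when run via hadoop-streaming.jar
mapped = sorted(mapper(["big data", "Big Data on Azure"]))
counts = list(reducer(mapped))
print(counts)
```

Because the job touches nothing platform-specific, deploying it on a Hadoop-based Windows Azure service is the same act as deploying it anywhere else.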
This is great news as it lowers the barrier for building Hadoop based applications while encouraging rapid prototyping scenarios in the Windows Azure cloud for Big Data.
To facilitate all of this, we have also entered into a strategic partnership with Hortonworks that enables us to gain unique experience and expertise to help accelerate the delivery of Microsoft’s Hadoop based distributions on both Windows Server and Windows Azure.
For end users, the Hadoop-based applications targeting the Windows Server and Windows Azure platforms will easily work with Microsoft’s existing BI tools like PowerPivot and recently announced Power View, enabling self-service analysis on business information that was not previously accessible. To enable this we will be delivering an ODBC Driver and an Add-in for Excel, each of which will interoperate with Apache Hive.
Finally, in line with our commitment to Interoperability and to facilitate the high performance bi-directional movement of enterprise data between Apache Hadoop and Microsoft SQL Server, we have released two Hadoop-based connectors for SQL Server to manufacturing.
The SQL Server connector for Apache Hadoop lets customers move large volumes of data between Hadoop and SQL Server 2008 R2, while the SQL Server PDW connector for Apache Hadoop moves data between Hadoop and SQL Server Parallel Data Warehouse (PDW). These new connectors will enable customers to work effectively with both structured and unstructured data.
I really look forward to sharing updates on all this as we move forward. For now, check out www.microsoft.com/bigdata and check back on the DPI blog tomorrow.
I recently returned from Paris, where I attended both the annual Open Source Think Tank and Open World Forum events. It was really great getting to chat with some of the folks representing the myriad of businesses that have sprung up around Open Source solutions, and having some in-depth discussions about broad industry trends.
The Open Source Think Tank is pretty much a unique event in that it gives attendees the opportunity to examine open source and cloud evolution through detailed analysis and discussions of specific industry related case studies, as well as panels, presentations and networking opportunities with a collaborative group of folks from across the industry.
For its part, Open World Forum brings together hundreds of decision-makers, developers and users from across the world to discuss open technological, business and societal initiatives to help shape the digital future.
I was happy to be able to participate in a number of panel discussions at both events. At the Think Tank, I got to brainstorm on the topic of “Open Source Ethos as an Agent of Change," which essentially looked at how closed source companies use the open source ethos to energize their companies and change how they relate to their customers, partners and employees. I was joined by Erynn Petersen of AOL and Gil Yehuda of Yahoo, and a lively conversation ensued.
From a Microsoft perspective I pointed out how we recognize the value of openness in working with a diverse array of OSS communities to help developers, customers and partners succeed in today's heterogeneous IT environments.
I noted that we now have a better appreciation for how the open source development model can be useful for our own software development as well as the potential for Microsoft technologies to be great platforms for open source applications. I also briefly talked about our increased investments in standards, interoperability and integration with Open Source Software.
The second Think Tank discussion revolved around Open Source, Open Systems and Open Standards and what that means today. Larry Augustin from SugarCRM and Yahoo's Gil Yehuda also participated, and a lively discussion ensued, a lot of which was way off topic :-)
More good news on Microsoft's commitment to Interoperability in the cloud: last week Sandy Gupta, the General Manager for Microsoft's Open Solutions Group, announced that Windows Server Hyper-V is now an officially supported hypervisor for OpenNebula.
This open source project is working on a prototype for release next month, and it will soon be possible for customers to build and manage OpenNebula clouds on a Hyper-V based virtualization platform.
"Windows Server Hyper-V is an enterprise class virtualization platform that is getting rapidly and widely deployed in the industry. Given the highly heterogeneous environments in today’s data centers and clouds, we are seeing enablement of various Linux distributions including SUSE, CentOS, Red Hat, and CS2C on Windows Server Hyper-V, as well as emerging open source cloud projects like OpenStack -- and now OpenNebula," Gupta said in a blog post.
I'm heading off to Paris this weekend to participate in the annual Open Source Think Tank and Open World Forum events held in that wonderful city next week.
I'm really looking forward to chatting with all those folk interested in this space, from enthusiasts to developers and end users.
I will be joined at these events by my colleague and Technical Ambassador Craig Kitterman, as well as by Alfonso Castro, our local market interoperability program lead.
We will present technical sessions and participate in a number of panel discussions, ranging from what Open Source, Open Standards and Open Systems mean today to Open Source as an agent of change.
Our participation in these Paris events complements our existing broad engagement with OSS communities, and we look forward to meeting our friends from the PHP, Node.js, Drupal, Joomla, and WordPress communities, as well as to making a lot of new ones.
You can read more about our participation in Paris here, and we look forward to meeting those of you lucky enough to be attending in person.
Microsoft today signed a collaboration agreement with China Standard Software Corporation (CS2C), the country’s leading domestic Linux operating system provider, to jointly develop, market and sell solutions for the cloud-computing market in China.
The deal will help provide the mixed source infrastructure necessary to facilitate the rapid growth and change taking place across China, where cloud-based infrastructure is budding across cities and provinces.
The primary goal of this agreement, which was announced at a joint event in Beijing today, is to provide public and private cloud solutions to a diverse array of industries through a rich partner ecosystem.
The mixed source solutions stemming from this collaboration will be built on Microsoft’s Hyper-V Open Cloud architecture and will include support to run CS2C NeoKylin Linux Server products.
As Sandy Gupta, the General Manager for Microsoft’s Open Solutions Group, notes in his blog, Microsoft is working with CS2C to bring about a true, open architecture in the area of cloud management and automation for IT organizations throughout China.
“A cornerstone of this agreement is for CS2C-branded Linux servers to run under the Hyper-V Cloud architecture as a first class guest. CS2C and Microsoft will work together to enable CS2C Linux to run well on Hyper-V and be managed through Microsoft System Center,” Gupta says.
Microsoft and CS2C have also pledged to sponsor a joint virtual technology lab in Beijing for solution development and testing of cloud solutions that will allow customers to move to virtualization and a cloud-based IT infrastructure.
The lab will focus on the certification of CS2C NeoKylin Operating System on Windows Server 2008 R2 with Hyper-V, creating Microsoft Systems Center management packs for CS2C NeoKylin Operating System application workloads, and incorporating support for CS2C NeoKylin Operating System within the Hyper-V Cloud architecture.
As part of the collaboration, CS2C will also join the Interop Vendor Alliance, an established community of software and hardware vendors that have been working together to enhance interoperability with Microsoft systems.
In addition to establishing market and technology collaboration, the two companies have also signed a customer legal covenant agreement.
In line with Microsoft’s ongoing commitment to Interoperability, Gupta notes that “interoperable Linux and Windows offerings will empower customers to build solutions that will enable them to capitalize on opportunities to expand, grow and achieve the focus necessary to fuel innovation.”
Han Naiping, the president of CS2C, notes that this is an “important opportunity to collaborate with Microsoft to deliver comprehensive, flexible, cloud-based solutions that will serve as a platform for business growth.”
You can read more about this agreement on Sandy Gupta’s blog and in the press release.
Gianugo Rabellino, Microsoft’s Senior Director for Open Source Communities, just finished delivering his keynote at OSCON in Portland. As Gianugo is now wandering around the OSCON sessions and expo floor, I thought it would be useful to give you a quick recap of what he just presented.
During his keynote, Gianugo discussed how both the world and Microsoft are changing, saying that “at Microsoft we continue to evolve our focus to meet the challenging needs of the industry: we are open, more open than you may think.”
Gianugo explained that the frontiers between open source, proprietary and commercial software are becoming more and more blurred. The point is not whether you run your IT on an Open Source stack or a commercial stack; the important thing is how you can assemble software components and build solutions on top of them using APIs, protocols and standards. And the reality is that most IT systems use heterogeneous components, he said.
Looking at the cloud, the lines blur even further. What does Open Source or Commercial mean in the cloud?
Gianugo put it this way: “In the cloud, we see just a continuous, uninterrupted shade of grey, which makes me believe it's probably time to upgrade our vision gear. If we do that, we may understand that we have a challenge ahead of us, and it's a big one: we need to define the new cornerstones of openness in the cloud. And we actually gave it a shot on this very same stage one year ago, when we came up with four interoperability elements of a cloud platform: data portability, standards, ease of migration & deployment, and developer choice.”
Finally, Gianugo talked about how Microsoft’s participation in Open Source communities is real, and he used his keynote as an opportunity to announce a few new projects and updates.
One way we interact with open source software is by building technical bridges, Gianugo said, giving an example on the virtualization front: announcing support for the Red Hat Enterprise Linux 6.0 and CentOS 6.0 guest operating systems on Windows Server Hyper-V (which follows the Linux Interoperability announcement made at OSBC a few weeks ago).
On the cloud development front, we are continuing to improve support for open source languages and runtimes, Gianugo said, announcing the availability of a new version of the Windows Azure SDK for PHP, an open source project which is led by Maarten Balliauw from RealDolmen, where Microsoft is providing funding and technical assistance.
Maarten has all the details on the new features and a link to the open source code of the SDK. This announcement also includes a set of cloud rules for the popular PHP_CodeSniffer tool that Microsoft has developed to facilitate the transition of existing PHP applications to Windows Azure. The new set of rules is available on GitHub.
An on demand Webcast of Gianugo’s keynote will soon be available, and I’ll post the link to it here.
We've been participating in creating a roadmap for the adoption of cloud computing throughout the federal government with the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce and the United States' first federal physical science research laboratory. NIST is also known for publishing the often-quoted Definition of Cloud Computing, used by many organizations and vendors in the cloud space.
Microsoft is participating in the NIST initiative to jumpstart the adoption of cloud computing standards, called Standards Acceleration to Jumpstart the Adoption of Cloud Computing (SAJACC). The goal is to formulate a roadmap for the adoption of high-quality cloud computing standards. One way NIST does this is by providing working examples that show how key cloud computing use cases can be supported by interfaces implemented by various cloud services available today. Microsoft worked with NIST and our partner Soyatec to demonstrate how Windows Azure can support some of the key use cases defined by SAJACC, using our publicly documented and openly available cloud APIs.
NIST works with industry, government agencies and academia. They use an open and ongoing process of collecting and generating cloud system specifications. The hope is to have these resources serve to both accelerate the development of standards and reduce technical uncertainty during the interim adoption period before many cloud computing standards are formalized.
By using the Windows Azure Service Management REST APIs, we are able to manage services and run simple operations, including CRUD operations, and handle authentication and authorization using certificates. Our Service Management components are built on RESTful principles and support multiple languages and runtimes, including Java, PHP and .NET, as well as IDEs including Eclipse and Visual Studio.
The Service Management API also provides rich interfaces and functionality for scalable access to public, private and hosted clouds. All of the SDKs are available as open source, too. With the Windows Azure Storage Service REST APIs, we can use three sets of APIs that provide storage management support for Tables, Blobs and Queues, following the same RESTful principles and supporting the same set of languages. These APIs, too, are available as open source.
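As a hedged illustration of how plain these REST calls are (shown in Python; the subscription ID is a placeholder, the helper name is ours, and `requests` is a third-party HTTP library), here is roughly what a certificate-authenticated Service Management call looks like:

```python
def list_hosted_services_request(subscription_id, api_version="2011-10-01"):
    """Build the URL and headers for the List Hosted Services operation of
    the Windows Azure Service Management REST API. Authentication happens
    at the TLS layer via a management certificate, so no Authorization
    header is needed; the x-ms-version header selects the API version."""
    url = ("https://management.core.windows.net/"
           f"{subscription_id}/services/hostedservices")
    headers = {"x-ms-version": api_version}
    return url, headers

url, headers = list_hosted_services_request(
    "00000000-0000-0000-0000-000000000000")  # placeholder subscription ID
print(url)
# Issuing the call (assumes a PEM-format management certificate):
#   import requests  # third-party
#   resp = requests.get(url, headers=headers, cert="management.pem")
#   # resp.text is an XML list of the subscription's hosted services
```

Because the surface is ordinary HTTPS plus a version header, any language with an HTTP stack and client-certificate support can drive it, which is what makes the multi-language SDKs straightforward.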
We have also created an example, the SAJACC use case drivers, to demonstrate this code in action. This demonstration, written in Java, shows the basic functionality for the NIST sample. We created the following scenarios and corresponding code:
1. Copying Data Objects into a Cloud: the user copies items from their local machine (client) into Windows Azure Storage without any change to the files; we assume credentials consisting of an account name and key pair. The scenario generates a container with a random name on each test execution, using the Windows Azure API, to avoid possible name conflicts. With the credentials previously created, the user prepares the Windows Azure Storage execution context. A blob container is then created; with an optional custom network connection timeout and retry policy, you can easily recover from network failures. We then create a block blob and transfer a local file to it. Finally, we compute an MD5 hash for the local file, get one for the blob, and compare them to show that they are equivalent and no data was lost.
2. Copying Data Objects Out of a Cloud repeats the first use case, Copying Data Objects into a Cloud. We then add another scenario: we set public access on the blob container and get its public URL; as an unauthenticated (public) user, we retrieve the blob with an HTTP GET request and save it to the local file system. We then generate an MD5 hash for this file and compare it to the originals we used previously.
3. Erasing Data Objects in a Cloud erases a data object on behalf of a user. With the credentials and data created in the previous examples, we take the public URL of the blob and delete the blob by name. We then verify with an HTTP GET request that it has been erased.
4. VM Control: Allocating VM Instance: the user creates a secure, well-performing VM instance to compute on. The scenario involves creating a Java keystore and truststore from a user certificate to support SSL transport (described below). We also create a Windows Azure management execution context from which to issue commands, and use it to create a hosted service. We then prepare a Windows Azure service package, copy it to the blob we created in the first use case, and deploy it to the hosted service using its name and service configuration information, including the URL of the blob and the number of instances. We can then change the instance count to run as many role instances as we want and verify the change by getting status information from the deployment.
5. VM Control: Managing Virtual Machine Instance State: the user can stop, terminate, reboot, and start a virtual machine instance. We first prepare an application to run as a Web Role in Windows Azure. The program mounts a Windows Azure Drive to keep some files persistent when the VM is killed or rebooted, and serves two web pages: one that creates a random file inside the mounted drive, and another that lists all the files on the drive. We then build and package the program and deploy the Web Role as a hosted service on Windows Azure using the portal. Next we create another program to manage the VM instance state, similar to the previous use case, VM Control: Allocating VM Instance. We use HTTP GET requests to visit the first web page, creating a random file on the Windows Azure Drive, and the second web page, which lists the files to show that the drive is not empty. We then use the management execution context to stop the VM and disassociate its IP address, confirming this by visiting the second web page, which is no longer available. Finally, we use the same management execution context to restart the VM and confirm that the files on the drive persist across restarts.
6. Copying Data Objects between Cloud-Providers: the user copies data objects from one Windows Azure Storage account to another. This example involves creating a program to run as a worker role, in which a storage execution context is created. We use the container from the first use case, Copying Data Objects into a Cloud, and download the blob to the local file system. We then create a second storage execution context and transfer the downloaded file to it. Finally, as in the first use case, we create and deploy a new program that retrieves the two blobs and verifies that their MD5 hashes match.
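The integrity check that recurs through use cases 1, 2, and 6 can be sketched in a few lines: hash the local bytes and the copy that made the round trip through storage, then compare the hex digests. The byte array standing in for the downloaded blob is a placeholder for the real retrieval step.

```java
import java.security.MessageDigest;

// Sketch of the MD5 comparison used to show no data was lost in transit.
public class Md5Check {
    // Hex-encoded MD5 digest of a byte array.
    static String md5Hex(byte[] data) throws Exception {
        StringBuilder hex = new StringBuilder();
        for (byte b : MessageDigest.getInstance("MD5").digest(data)) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] local = "sample blob contents".getBytes("UTF-8");
        // Stands in for the blob fetched back from Windows Azure Storage.
        byte[] downloaded = local.clone();
        System.out.println(md5Hex(local).equals(md5Hex(downloaded))
                ? "hashes match, no data lost" : "hash mismatch!");
    }
}
```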
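For the public-access step in use case 2, once the container allows anonymous reads the blob is addressable with a plain HTTP GET at a predictable URL. The account, container, and blob names below are placeholders.

```java
// Sketch: forming the public URL of a blob in a container whose access
// level permits anonymous reads; all three name parts are placeholders.
public class BlobUrl {
    static String publicBlobUrl(String account, String container, String blob) {
        return "https://" + account + ".blob.core.windows.net/"
                + container + "/" + blob;
    }

    public static void main(String[] args) {
        System.out.println(publicBlobUrl("myaccount", "mycontainer", "report.txt"));
    }
}
```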
Java code to test the Service Management API
Managing API Certificates
For the Java examples (use cases 4-6), we need key credentials. In our download we demonstrate the Service Management API being called with an IIS certificate, and we walk through generating an X.509 certificate for the Windows Azure Management API using the IIS 7 management console and the Windows certificate manager: creating a self-signed server certificate, exporting it to the Windows Azure portal, and generating a JKS-format keystore for the Java Azure SDK. We upload the certificate to the Azure account and convert the keys for use in the Java keystore and for calling the Service Management API from Java. We also demonstrate the Service Management API using certificates created with the Java keytool: we use the Java keystore to export an X.509 certificate for the Windows Azure Management API, upload the certificate to an Azure account, construct a new Service Management REST object with the appropriate parameters, and finish by testing the Service Management API from Java.
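The point of the JKS keystore is to present the management certificate during the TLS handshake. A minimal sketch of that wiring, using only standard JSSE classes, looks like this; the keystore here is created empty in memory purely for illustration, whereas in practice you would load the JKS file that holds the certificate uploaded to the portal.

```java
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;

// Sketch: turning a JKS keystore into an SSLContext so the management
// certificate is offered as the client credential on HTTPS connections.
public class ManagementSsl {
    static SSLContext contextFrom(KeyStore keystore, char[] password) throws Exception {
        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keystore, password);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), null, null); // default trust managers
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        // Empty in-memory keystore for the sketch; in practice:
        //   ks.load(new FileInputStream("azure.jks"), password);
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(null, null);
        System.out.println(contextFrom(ks, new char[0]).getProtocol());
    }
}
```

The resulting `SSLContext` can then supply the socket factory for whatever HTTPS client issues the Service Management requests.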
For more information, the Windows Azure Storage Services REST API Reference and the Windows Azure SDK for PHP Developers are useful resources to have. You may also want to explore further with the accompanying tutorials.
With the above tools and Windows Azure cloud services, you can implement most of the use cases listed by NIST for SAJACC. We hope you find these demonstrations and resources useful, and please send feedback!
Jas Sandhu, Technical Evangelist, @jassand