by Peter Galli on April 06, 2010 12:21pm
Last summer, a new team within Microsoft, Education Labs, debuted the popular Live Services Plug-in for Moodle. Part of that team's charter is to listen and respond quickly to educators' feedback, so when positive feedback for the tool started coming in, the team reached out to learn what other needs educators had around Moodle, an open-source learning management system.
They quickly realized that, while educators liked the efficiency and time savings of the single sign-on the plug-in provided to users of both Moodle and Live@edu, they also wanted other critical tasks made simpler and faster. For example, they wanted to save time when using Office and Moodle together. School administrators, meanwhile, wanted to know how to put the robustness of the SharePoint platform underneath Moodle.
Today, the Education Labs team responded, launching a free Office Add-in for Moodle and releasing a white paper on how to integrate SharePoint with Moodle.
Office Add-in for Moodle: As described in this blog post, this tool is easy to install and cuts saving an Office document to Moodle from some eight steps down to around four. When uploading many files to one or more Moodles, that adds up to significant time savings. Since many educators upload their course content to Moodle at the beginning of the semester, that task just got easier and faster.
You can download the tool, which is available in six languages, here.
In addition to saving Office documents to a Moodle, educators frequently need to update or edit those files. Until now, if you needed to change an Office document on a Moodle, you could click on it to open it, but you then had to save it to the desktop and upload it again. With the new Office Add-in installed, that eight-step process becomes a single step: Save!
You can watch the Channel 9 video here.
White paper on integrating SharePoint as the file system for Moodle: Unlike the Office Add-in, which is an actual piece of software available as a free download, schools already using SharePoint and Moodle have all the software they require.
They just need to follow the instructions in the white paper, which give IT pros the ability to restore files accidentally overwritten or deleted by teachers, using SharePoint's versioning and recycle bin. In addition, SharePoint brings file search capabilities that Moodle does not have.
And, speaking of bulk uploads, an educator can use SharePoint to add multiple files to their Moodle simultaneously; note that even with the add-in, files are still uploaded one at a time.
These initiatives all underscore Microsoft's focus on providing compelling solutions that meet user needs, solutions that will continue to work with open source tools like Moodle.
Today Scott Guthrie blogged about new releases of Microsoft's Web developer tools that reflect a snapshot of improvements and contributions from the open source community and the Microsoft Open Technologies Hub (The Hub). The latest updates to ASP.NET SignalR and ASP.NET Web API are ready to use, thanks to this collaboration.
All code submissions were reviewed and tested by The Hub development team and had to meet a high bar before being merged into the source, ensuring each project maintains the quality and reliability our customers demand.
Once shipped, these products are fully supported by Microsoft and backed by its product lifecycle. This approach is unique: it allows rapid open source innovation while providing continuity for Microsoft's business customers.
Here’s a quick overview of the latest products and features, along with links to their open source repository homes. Please keep the feedback coming so we can continue to make these tools better together.
ASP.NET SignalR provides real-time web functionality to applications, and is perhaps best described by the ASP.NET SignalR website itself:
“ASP.NET SignalR is a new library for ASP.NET developers that makes it incredibly simple to add real-time web functionality to your applications. What is "real-time web" functionality? It's the ability to have your server-side code push content to the connected clients as it happens, in real-time.”
The Hub's support for ASP.NET SignalR comes with a long-term roadmap. As with the other open source projects in our portfolio, The Hub is dedicated to maintaining a high level of development resources for ASP.NET SignalR and to improving the customer feedback loop to support growing usage.
ASP.NET SignalR has an active community. Contributions can be submitted to the ASP.NET SignalR GitHub repository, where all code submissions are reviewed and tested by The Hub to ensure the project remains high quality and reliable. A contributor first signs a contribution agreement and then submits a patch, which, if accepted, is merged into the source.
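To make "push content to the connected clients" concrete, here is a minimal hub sketch of the kind SignalR enables; the hub, method and callback names (ChatHub, Send, addMessage) are illustrative choices, not taken from the project:

```csharp
// A minimal SignalR hub (SignalR 1.x era). ChatHub, Send and addMessage
// are illustrative names, not part of the library itself.
using Microsoft.AspNet.SignalR;

public class ChatHub : Hub
{
    // Invoked by a client; pushes the message to every connected client
    // in real time, without the clients polling for it.
    public void Send(string name, string message)
    {
        Clients.All.addMessage(name, message);
    }
}
```

In SignalR 1.x the hub endpoint is registered at application start (for example with RouteTable.Routes.MapHubs() in an ASP.NET application), after which JavaScript and .NET clients can invoke Send and receive addMessage callbacks.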
ASP.NET Web API
The ASP.NET Web API now includes support for OData endpoints, including the JSON Light format and custom conventions. Automated help page generation lets developers quickly and easily create documentation for their web APIs.
Get started with OData at http://www.asp.net/web-api/overview/odata-support-in-aspnet-web-api.
More details on automated help page generation can be found at http://blogs.msdn.com/b/yaohuang1/archive/2012/08/15/introducing-the-asp-net-web-api-help-page-preview.aspx.
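As a rough sketch of what the OData support enables, the following assumes the Web API OData add-on package is installed; the Product model and ProductsController are made-up names, and attribute and namespace details shifted across the previews, so treat this as illustrative rather than definitive:

```csharp
// A minimal sketch of OData-style queryability in ASP.NET Web API.
// With the OData package installed, [Queryable] lets callers shape
// results with query options such as $filter, $orderby, $top and $skip.
using System.Linq;
using System.Web.Http;
using System.Web.Http.OData;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductsController : ApiController
{
    private static readonly IQueryable<Product> Products = new[]
    {
        new Product { Id = 1, Name = "Widget", Price = 4.99m },
        new Product { Id = 2, Name = "Gadget", Price = 9.99m }
    }.AsQueryable();

    // e.g. GET /api/products?$filter=Price gt 5
    [Queryable]
    public IQueryable<Product> Get()
    {
        return Products;
    }
}
```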
Here at The Hub we are very excited to see new projects and updates continue to roll out with the help of the open source community. With your participation, we're continuing to build open source engineering best practices. As we move ahead, we look forward to working even more closely with open source projects and communities.
by Sam Ramji on October 27, 2008 09:00am
Today at PDC in Los Angeles, Ray Ozzie unveiled the Azure Services Platform, which will enable developers to build the next generation of applications - spanning all the way from the cloud to the enterprise data center. My team's focus has been on making sure that this platform treats open source development technologies as first-class citizens.
A key component of the Azure Services Platform is Windows Azure, an infrastructure that provides core capabilities such as virtualized computation, scalable storage, and automated service management. Developers will be able to build partial or complete service-based applications, or extend existing ones, using Live Services, .Net Services and SQL Services.
They will also be able to choose from a range of open source development tools and technologies, and be able to access Azure services using a variety of common internet standards, including HTTP, REST, WS* and Atom.
The Azure platform's goal is to support all developers and their choice of IDE, language and technology. We are also providing programmable components that can be consumed by other applications, and Microsoft is funding and sponsoring open source software development kits to enable Java and Ruby developers to take advantage of Azure.
This is significant as this is the first time we are delivering cross-platform software development kits at the same time as Microsoft Developer Network software development kits.
We are also funding these open source projects, under the BSD licensing model, in collaboration with Thoughtworks Inc. and Schakra Inc., and they will be hosted on the open source portals RubyForge and SourceForge.
Much of this interoperability work was undertaken by Jean Paoli, the General Manager for Interoperability Strategy, and his team, including Vijay Rajagopalan, the Principal Architect for Interoperability Strategy, so a big thanks is due to them on this front.
In addition, as part of Microsoft's commitment to openness and working with open source communities, I asked the Open Source Technology Center (led by Tom Hanrahan) to come up with some specific examples that show how open source communities can access Windows Azure.
This work has allowed us to deliver several 'proofs of concept' showing open source developers that they can create applications that run as services and that have access to services in the cloud. One of these involves Gallery, a popular PHP photo application.
Specific to Gallery, we've done two simple things: we created wrappers that convert the Windows Azure API to PHP objects, and we created a Windows Azure subclass inherited from the Windows NT Platform class. The net result is that, with a small amount of code, we were able to connect one of the top PHP applications to Windows Azure, specifically storing photo images as BLOBs in the cloud.
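The PHP wrappers themselves aren't reproduced here, but the underlying blob interface is plain HTTP/REST, as mentioned above. Here is a hypothetical C# sketch of the shape of a blob upload; the account and container names are made up, and the request signing the storage service requires (a SharedKey Authorization header plus headers such as x-ms-date) is omitted, so this illustrates the REST shape of the call rather than being a working client:

```csharp
// Hypothetical sketch: uploading a photo as a blob over plain HTTP/REST.
// Account and container names are invented; a real request must also carry
// x-ms-date and a SharedKey-signed Authorization header, omitted here.
using System;
using System.IO;
using System.Net;

class BlobUploadSketch
{
    static void Main()
    {
        byte[] photo = File.ReadAllBytes("photo.jpg");

        var request = (HttpWebRequest)WebRequest.Create(
            "http://myaccount.blob.core.windows.net/photos/photo.jpg");
        request.Method = "PUT";
        request.ContentLength = photo.Length;

        using (Stream body = request.GetRequestStream())
        {
            body.Write(photo, 0, photo.Length);
        }
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine(response.StatusCode); // expect Created (201)
        }
    }
}
```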
Finally, Microsoft is also going to publish the "M" language specification, including MSchema, MGrammar and MGraph, under the Open Specification Promise. This will facilitate the interoperability of the "Oslo" declarative modeling language, codenamed "M," with prominent industry standards such as WS* specifications, XML formats, industry protocols and security standards.
Stay tuned, because there's more to come.
by Peter Galli on January 26, 2009 08:48pm
Microsoft has released more source code under an OSI-approved license: this time it has made the source code for the Web Sandbox runtime available under the Apache 2.0 open source license.
The Web Sandbox project explores how to advance the web platform to improve security, isolation, quality of service and extensibility capabilities for web developers and website users.
More information on the licensing details, as well as comprehensive documentation for experimenting and integrating with the Web Sandbox, can be found here.
But, while developers are encouraged to help define and refine the Web Sandbox, it is not recommended for production sites, as it is still under development.
The Web Sandbox was created in response to limitations found in the current web platform, and is designed to explore potential solutions. Having a more secure and robust architecture as a foundational building block will help drive the next wave of Web innovation.
Since the initial release of Web Sandbox at PDC 2008, the team has received a lot of useful feedback from the web security community, and has also been collaborating with a number of customers, partners and the standards communities, all of whom want to adopt the technology when it is ready.
The goal? An open and interoperable standard that works well with complementary technologies like script frameworks and helps drive widespread adoption of the Web Sandbox.
This move is good news for Microsoft and the open source communities. But, it is important to note that while an Apache license is being used, the Web Sandbox project is not an Apache Software Foundation project and is not sponsored or endorsed by the ASF.
Microsoft does, however, already have an active relationship with the ASF. In fact, last year the company announced it had become a sponsor of the ASF, helping the Foundation pay administrators and other support staff so that its developers can focus on writing great software.
Sam Ramji, Senior Director of Platform Strategy at Microsoft, also delivered a keynote address at ApacheCon in New Orleans last November.
Microsoft's Interoperability Technical Strategy Team already contributes code to the Apache Stonehenge incubator project; the company has also contributed a patch to ADOdb, a popular data access layer for PHP that is used by many applications and licensed under the LGPL and BSD; and Microsoft's Powerset team contributes to HBase, an open-source, column-oriented, distributed database written in Java.
by MichaelF on August 22, 2006 03:35pm
Just returning from LinuxWorld in San Francisco, and virtualization was once again the topic du jour. Those of you outside the technology vendor-sphere (where we like to speak in weird acronyms and corporate buzzwords) might wonder why Microsoft and many others can't stop talking about virtualization. Go to any IT conference today and it's highly likely there will be at least some sessions, if not a bevy of keynote speeches, on the topic. These are usually accompanied by marketecture diagrams of Lego-block-like pictures showing different operating systems all running in some combination on top of a single physical server.
Having once worked at IBM, I'm long familiar with the idea of virtualization, often called 'logical partitioning' in IBM mainframe speak. The reason there's so much discussion around virtualization today is that it is becoming much more widely available, at a much better value, than in the past. Intel and AMD have improved their microprocessors to make them virtualization-aware (in the past, virtual machine managers had to do all sorts of silliness to get around the very virtualization-unaware x86 instruction set). This has allowed virtual machine software developers to build powerful technologies, often called hypervisors, that can reside in the operating system itself, allowing for much more efficient, reliable and seamless virtualization of one or more operating systems on top of the 'host' operating system.
Cool science project or is there any real use for this stuff? Let me give you a simple example of how we’ve used this here in our Open Source Labs. We provide quite a few different types of Linux distributions of various version levels and hardware architectures for testing and analysis, probably over fifty or so all told. Typically, you would use a single server (or even a single PC) for each operating system, which would mean about fifty different machines. Each of those machines requires power, cooling, new parts, maintenance, and so on. The costs add up quickly; in some data centers I’ve seen, power and cooling can be over half of the total operations costs year over year. In our lab, we can run almost all of these Linux distributions on one server, a four-way Opteron-based HP server with eight gigs of memory and a lot of disk. This is for testing, so I wouldn’t run this many virtual guest images on anything with heavy production workloads, but you get the idea. Bottom line, I save money and time (particularly in systems management).
I've also spoken with customers who are using virtualization for disaster recovery and backup, for new deployment scenarios where a call center or branch office can be 'installed' with virtualized images in a fraction of the time of traditional server installs, and for scenarios where testing and quality assurance groups can run large, diverse and automated tests of hardware and software across dozens of operating system configurations. IDC has forecast that 45 percent of new servers purchased this year will be virtualized.
Virtualization is a critical part of Microsoft's strategy, and we have been in this business for a while with our Virtual PC and Virtual Server 2005 products. Today, Virtual Server 2005 R2 is available as a free download. We have also opened up the specification of our Virtual Hard Disk (VHD) image format with Virtual Server 2005. You can use this specification to learn how to access (read and modify) the data stored in a Virtual PC or Virtual Server virtual hard disk. The VHD format spec is available under a royalty-free license.
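As a small example of what the published spec makes possible, here is a rough C# sketch that inspects a fixed VHD's footer; the field offsets reflect my reading of the specification and should be verified against the document itself:

```csharp
// A rough sketch, based on my reading of the published VHD spec, of
// inspecting a fixed virtual hard disk's footer: the last 512 bytes of
// the file, beginning with the 8-byte cookie "conectix". Field offsets
// below are assumptions to verify against the specification itself.
using System;
using System.IO;
using System.Text;

class VhdFooterSketch
{
    static void Main(string[] args)
    {
        using (var fs = File.OpenRead(args[0]))
        {
            var footer = new byte[512];
            fs.Seek(-512, SeekOrigin.End);   // footer sits at the end of the file
            fs.Read(footer, 0, footer.Length);

            string cookie = Encoding.ASCII.GetString(footer, 0, 8);
            Console.WriteLine("Cookie: " + cookie); // expect "conectix"

            // Current Size is an 8-byte big-endian value (offset 48 per the spec).
            ulong size = 0;
            for (int i = 0; i < 8; i++)
                size = (size << 8) | footer[48 + i];
            Console.WriteLine("Virtual disk size: " + size + " bytes");
        }
    }
}
```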
We are making even larger investments with our 'Viridian' hypervisor and System Center Virtual Machine Manager (code-named 'Carmine') projects. These are our Windows Server Longhorn virtualization hypervisor and virtualization management products, respectively. You can download the beta of System Center Virtual Machine Manager today. From what I've seen thus far in the development of these products, you can expect some great software from us in this area. You may want to check out Mike Neil's post about how we announced and demoed much of this at WinHEC this year – Mike also has a link to a video of the WinHEC virtualization demos from Bill Gates' keynote.
Related to this, we recently announced an important partnership between Microsoft and XenSource, the company behind the open source Xen project – the leading virtualization technology on Linux. Peter Levine, CEO of XenSource, discussed our partnership in his LinuxWorld San Francisco keynote. Together with XenSource we will be working on enabling great virtualization between Windows and Linux, which is significant for customers running heterogeneous environments who are looking to consolidate servers and take advantage of new deployment scenarios like those I described above. This work will be part of our Longhorn server plans, taking advantage of our virtualization technology, Viridian. I'm personally very excited by this partnership, and it is an indicator of how we think about our long-term product roadmaps vis-à-vis interoperability.
There is a lot happening in virtualization, and I think it's one of the most important change agents in our industry. Sure, there will be all sorts of hype, which is typical of this point in the adoption curve, but I've seen how it can save money and time in my own labs, and I've talked with customers who are finding similar advantages. Exciting times indeed. -bill
by admin on May 24, 2006 06:11pm
Submitted by: Alexandre Ferrieux
I'd like to describe my biggest frustration at the Unix-Windows boundary: the lack of a 'file descriptor abstraction' in Windows.
In Unix everything is a file descriptor, on which you simply use read(), write(), and select() regardless of the underlying reality (files, pipes, sockets, devices, pseudoterminals).
In Windows you have a separate set of APIs for every new type, with a few bridges here and there offering limited support (not even talking of Windows CE).
Here is my point: this may look like a purely aesthetic consideration (the sheer beauty of having fewer syscalls is irrelevant to the end user). But there is one catch: when it comes to *mixing* all these things together, complexity explodes in the Windows case, and in my case there are true show-stoppers.
More precisely: let's try single-threaded, event-driven programming with select()/poll()/WaitForMultipleObjects(). In Unix it amounts to passing a list of file descriptors. In Windows it is superficially the same (with handles), but it is *not*, because many handle types are simply not waitable. To circumvent that, there is of course overlapped I/O. But that is only possible when you open the handle yourself (to allow overlapped mode), not for one you inherit from the parent (like stdin).
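For readers more at home in .NET than Win32: WaitHandle.WaitAny is the managed wrapper over WaitForMultipleObjects, and this minimal sketch shows the single wait list the submitter describes. Only waitable handle types (events, mutexes, processes and the like) can join it; pipes and an inherited stdin cannot, which is exactly the limitation at issue:

```csharp
// Minimal sketch of the Win32 waiting model as surfaced in .NET.
// WaitHandle.WaitAny wraps WaitForMultipleObjects: it dispatches on
// whichever *waitable* handle signals first. Non-waitable handle types
// (pipes, console stdin) cannot appear in this list.
using System;
using System.Threading;

class WaitDemo
{
    static void Main()
    {
        var dataReady = new AutoResetEvent(false);
        var shutdown  = new ManualResetEvent(false);

        // Simulate a producer signalling after some work.
        new Thread(() => { Thread.Sleep(500); dataReady.Set(); }).Start();

        // Single-threaded dispatch over waitable handles only;
        // returns the index of the handle that was signalled.
        int signalled = WaitHandle.WaitAny(new WaitHandle[] { dataReady, shutdown });
        Console.WriteLine(signalled == 0 ? "data ready" : "shutdown");
    }
}
```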
The only remaining workaround is to spawn extra threads that do blocking I/O or a type-specific wait. But in resource-challenged environments (like WinCE), spawning an extra thread is sometimes not an option.
(1) Are MS aware of such limitations in WinCE and even XP?
(2) Any smart workaround to save me right now?
Answer (Jeffrey Snover, Architect: Administration Experience Platform):
PowerShell provides a similar abstraction on top of the OS, and we are working with feature teams to get providers written. We call these namespace providers. The model is slightly different from the UNIX one, but we think it is more powerful. Our design center was admin scripting, so we need to provide these abstractions over the Registry, WMI, AD, SQL, certificate stores, etc.
If you download PowerShell and explore the concept of drives (Get-PSDrive), this will become a little clearer.
I'll monitor the comments here if you have any additional questions/feedback.
by jcannon on March 09, 2007 04:08pm
In our continuing series of papers describing both the research undertaken by the Open Source Software Lab, and technical tips, here is the latest networking configuration technical analysis.
Abstract: This document provides the reader with an analysis of VPN functionality within the Linux operating system. Specifically, it provides a breakdown of VPN components and a description of what is available to Linux administrators in terms of manageability and functionality. It also provides a set of HOW-TOs in the areas of VPN and IPsec.
by jcannon on July 18, 2006 02:15pm
There is a buzzword floating around out there – "business readiness". It seems everyone (including people here at Microsoft) is trying to capture something important to the organizations and people responsible for selecting, deploying and maintaining software for businesses. What does it really mean, though?
Does it mean that a software package, distribution or application meets a benchmark? Does it mean that it is supportable without getting the Big Three consulting companies involved? Does it mean that all its functionality has been tested using regression test cases? Does it mean performance and scalability of the software meets needs? Does it mean that the software will be kept alive into the future by a vibrant community?
In my opinion it means all of the above.
So, what is the problem?
The problem is that business readiness is in the eye of the beholder! (Definition of beholder – the dude who happens to be holding the software when the music stops!)
I think this is a complex problem for two reasons: it is hard to measure business readiness objectively, and different beholders have different needs.
I will concentrate on the first point – how to objectively measure business readiness – and suggest a way to look at it. This is not a recipe, just a few thoughts on what we should pay attention to. Hopefully you can dive into the suggested links and find material that helps you evaluate the business readiness of software you are considering.
There are many levels at which software must be evaluated – I assume here that the functionality of the software is not the issue. Of course this is a big assumption, but the evaluation of software "features" is a better-understood art than the non-functional aspects of software. (There is even a term, "non-functional requirements", used when writing requirements and specifications – I never quite got my head around how something that didn't function could be a requirement!)
What is the state of the art? This is a question that is very hard to answer. For any piece of software the best most people can do is to compare it to its competitors in the marketplace. Most organizations that use open source would not have the luxury of having the commercial software to compare against. They would have to rely on word of mouth or other such imperfect evaluations. Even for most commercial software it is hard to get a good grasp of how that software compares with other software.
One such organization is the ISBSG (International Software Benchmarking Standards Group), a non-commercial body that collects data about software projects and quality. The data is submitted voluntarily by software organizations across the world, in many different areas of software. The software for which such data is submitted is largely proprietary and commercial.
A good use of ISBSG data would be to compare defect density within an open source project to the benchmark for that kind of application within the ISBSG data. This would serve as an indicator of the quality of the open source software.
Other available data includes the "cost per function point" for a project – this can help you evaluate whether the cost of the project/product to your organization is close to the "standard" price for good-quality projects in the chosen application area.
Evaluating the software
Once the gold standard is known, other evaluation criteria can be applied to the software at hand. The gold standard provides a quantitative upper bound in terms of number of defects and cost. But IT departments do not run on cost alone….
For open source software there are a number of evaluation benchmarks/certifications being made available. However, the criteria used to evaluate open source doesn’t exist in a vacuum – it is based on hard earned lessons in software development in general. I think that these criteria apply to all software whether open source or commercial software.
There are a number of such standards bodies and evaluation models out there.
There is nothing stopping you from considering criteria from each of those models to evaluate the “business readiness” of the software you are concerned with. I suspect that any good model will show comparable results, or the discordant models will fall by the wayside!
Show me the money
In their "Expert Letter", Cap Gemini – developers of the OMMM model – try to make the point (somewhat unconvincingly, in my opinion) that because commercial software is developed differently from open source, it has to be evaluated differently.
In my opinion, it's all about the value the software provides. If that value can be distilled down to dollars, that may be the best way of convincing people.
Khaled El-Emam has this cool ROI process that starts with software metrics, such as the number of bugs, and ends up with a dollar calculation of how much a software product/project will cost its users in terms of "cha-ching". Maybe every product needs to be put through this "business readiness" measurement!
I am now thinking about visualizing the business readiness using some cool graphic tools – “be the software, be the soooooooftware” (apologies to “Caddyshack”!)
by Peter Galli on February 01, 2010 09:48am
Great news about the .Net Micro Framework, which Microsoft announced in November was being open sourced and made available under the Apache 2.0 license: the community development site, focused on supporting collaborative development of the Framework, has just been launched.
"This site is designed to be open just like our product is. There are lots of ways that you can use this site directly. I am hoping that this becomes the focus of a lively community interchange on the platform so give it a try. As always, your ideas and suggestions on how we can make the site more useful to you are appreciated," says Product Unit Manager Colin Miller.
Development work on a core implementation of the .Net Micro Framework will continue both at Microsoft and in conjunction with the larger .Net Micro Framework community. So far, a core tech team of volunteers from inside and outside Microsoft has been identified; it will work in specific areas to refine and direct project proposals and get them developed and accepted into the core code base.
In addition to the features to be incorporated into the core codebase, there are other extensions and add-ons to the platform that people have made and will continue to make: some of which are free, while others are for sale.
The community web site includes a Showcase where the creators of all these extensions, as well as services, can list them for users to find. If you have an extension, you can list it on the site yourself.
"As we found out with the Dare to Dream Different contest, held to see what cool ideas people could come up with a standard hardware reference board using the .Net Micro Framework, there are lots of great individual projects that people have created with the .Net Micro Framework. There is an Academic/Hobbyist discussion group where people can discuss their cool projects. There is a discussion group on the web site for proposing and discussing projects. Let's start the ideas rolling - what did you always want to see in the product? Who can you enlist to get it in?" Miller says.
The advantage of the .Net Micro Framework is that it allows Microsoft to offer a single programming model and toolchain from small peripheral devices to the server and on to the cloud. It is a platform that allows current .Net programmers to extend their reach into small devices.
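As a flavor of that single programming model, here is the classic "blink an LED" sketch in .NET Micro Framework C#; the GPIO pin is board-specific, so the pin value below is a placeholder:

```csharp
// A minimal .NET Micro Framework sketch (the classic LED blinker),
// showing the familiar C# programming model on a small device.
using System.Threading;
using Microsoft.SPOT.Hardware;

public class Program
{
    public static void Main()
    {
        // OutputPort drives a GPIO pin; the second argument is its initial
        // state. The pin here is a placeholder -- substitute your board's LED pin.
        var led = new OutputPort((Cpu.Pin)0, false);

        while (true)
        {
            led.Write(true);    // LED on
            Thread.Sleep(500);
            led.Write(false);   // LED off
            Thread.Sleep(500);
        }
    }
}
```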
"At PDC I spoke with a programmer who was very excited about the Micro Framework because his company had just turned down a project which they could do almost all of, but which included a requirement for a small, power efficient device. With the Micro Framework, he would not have to turn down that work again," Miller says.
by MichaelF on February 20, 2007 02:22pm
Just a quick note to let Port 25 readers know that yesterday Microsoft released Virtual PC 2007 as a free download.
With Virtual PC, one can run multiple operating systems on a single physical machine, which should be compelling for developers who need to test and debug across multiple platforms. From community feedback I know a number of readers are generally interested in virtualization, and I'd be curious to hear about your experiences after giving this a try.
Ben Armstrong's Blog is a good resource for information regarding Microsoft and Virtualization (Here is some info on Linux and Virtual PC from his blog).
On an unrelated note Mary Jo Foley wrote about Ian Murdock's visit to campus today and I have to say I'm really looking forward to his presentation. I mentioned it before but this is one of the things I really enjoy about this job: I get to meet all sorts of interesting and intelligent people (both in person and virtually) many of whom I'd never expect to meet as a result of working at Microsoft. Miguel de Icaza...Ian Murdock? Who would have guessed?
by Peter Galli on February 16, 2009 10:30am
Microsoft and Red Hat announced this morning that they have recently signed agreements to test and validate their server operating systems running on one another's hypervisors.
This is deeply significant as it means that customers will be able to confidently deploy Windows Server and Red Hat Enterprise Linux (RHEL), virtualized on Microsoft and Red Hat hypervisors, knowing that the solutions will be supported by both companies.
In short, Red Hat has joined Microsoft's Server Virtualization Validation Program, and Microsoft is now a Red Hat partner for virtualization interoperability and support.
Microsoft will also be listed in the Red Hat Hardware Certification List once the Red Hat certification process has been completed later this year.
Microsoft will also publish Linux Integration Components for RHEL when the testing and validation is complete and, according to Mike Neil's blog on this news, Red Hat is expected to provide Windows Hardware Quality Labs drivers for a variety of Windows Server versions.
"This means that those customers with valid support agreements will be able to run these validated configurations and receive joint technical support for running Windows Server on Red Hat Enterprise virtualization, and for running Red Hat Enterprise Linux on Windows Server 2008 Hyper-V or Hyper-V Server 2008," Neil says.
So, while Microsoft and Red Hat will continue to compete, customers have asked us to work together on technical support for server virtualization. These agreements respond to that request by giving them a new level of integration between Red Hat Enterprise Linux and Windows Server for their heterogeneous IT environments.
Customers with valid support agreements will now be able to call either Microsoft or Red Hat to have their issues resolved. If the first vendor contacted cannot resolve the issue, it will refer the problem to the other vendor for resolution, assuming the customer also has a valid support agreement with that vendor.
In the event that the second vendor cannot resolve the problem alone, Microsoft and Red Hat will work together to come to a resolution for the mutual customer.
What's more, once RHEL is validated as a guest on Windows Server 2008, Microsoft System Center Operations Manager 2007 R2 – which will include cross-platform monitoring – will support RHEL server versions 4 and 5 in the second quarter of this year, so that customers can manage the applications and operating systems in the guest VM.
This will allow customers to monitor end-to-end data center applications distributed across both Windows Server and RHEL, whether those servers are physical or virtual. That improves the visibility organizations have into these distributed applications and reduces operational costs by providing a single tool to manage them across operating systems.
Also, to be clear given that questions are going to be asked about how this compares to the existing relationship between Microsoft and Novell, this agreement with Red Hat is specific to joint technical support for our mutual customers using server virtualization. So, in that regard, think of it as one dimensional, whereas Microsoft's partnership with Novell is multi-dimensional.
For more on all this, you can read Mike Neil's blog, the press release here, and watch the public webcast.
by jcannon on July 21, 2006 03:51pm
Sam interviews Martin from Teamprise, a company which has developed a pretty interesting suite of client applications that can access Visual Studio Team Foundation Server from Macintosh, UNIX or Linux clients using Eclipse. The Teamprise implementation allows development teams to use the source control features as well as work item tracking from within the Eclipse IDE.
Related Links:
- Check out the Teamprise site
- Download the MP3 File Directly
Podcast Related Links:
- Subscribe in the Port 25 Podcast Feed
- Subscribe to Port 25 Podcasts in iTunes
by Peter Galli on July 07, 2009 06:15am
The number of projects hosted on CodePlex, Microsoft's open source project hosting site, crossed the 10,000 mark on Saturday, July 4, 2009; that represents 160 million lines of code hosted across 10 Team Foundation Servers.
Congratulations go to SharpFitter, a Visual Studio 2008 C# add-in that dynamically loads plug-ins, for being the 10,000th project, and to the entire CodePlex team for this great achievement.
As of close-of-business, Monday July 6, the total number of projects stood at 10,037.
This milestone comes shortly after CodePlex celebrated its third anniversary on June 27, and follows the recent agreement under which CodePlex projects are automatically fed into Black Duck's open source KnowledgeBase repository, and searchable through Koders.com, a search engine for open source and other downloadable code.
"We hope to see this incredible rate of growth continue to bring more open source development to the Windows platform," said Sara Ford, the Program Manager for CodePlex, which had just under 3-million visits in June and close to 10-million page views.
The total number of registered users on CodePlex now stands at 159,306.
The most popular license used by projects hosted on CodePlex is the Microsoft Public License, followed by the GPLv2, with the MIT license rounding out the top three.
You can read more about all this on the CodePlex blog.
by Peter Galli on November 09, 2010 11:15am
In case you missed it last week, Microsoft has made version 2.0 of the F# compiler and core libraries available under the Apache 2.0 license to support education and tool development.
The source code is published as part of the F# PowerPack CodePlex project, which now includes libraries, tools and the compiler/library source code drops. F# is a functional programming language.
The code was previously made available under a Microsoft shared-source license, and the binary versions have been available for downloading at no cost, either as a stand-alone package or as a plug-in to Visual Studio.
This release changes all that: the development team is now moving to a "code drop" model, in which new versions of the compiler and library code will be released alongside new releases of the language itself, as part of the F# PowerPack.
In a blog post announcing all this, Don Syme, a principal researcher for Microsoft Research and the person who developed and maintains the code, says this release reinforces the commitment Microsoft is making to F#, including the inclusion of F# in Visual Studio.
"The real focus of F# is a quality experience of functional programming in Visual Studio, and that is what our team are driven to achieve and what we live for. To augment this, we are glad to be able to provide a compiler/library source drop, and are excited about the role this can play for education and tool development," he says.
by jonrosenberg on July 26, 2007 12:00pm
This is my first blog post on Port 25, and it is timely, as my team and I are attending OSCON with the folks from Bill Hilf's team.
I have some thoughts regarding the future of open source and how an organization matures along with the movement it helped to create. As Director of Source Programs at Microsoft I can attest to the value of keeping up with your own growth. We started on a journey, over three years ago, with the release of Windows Installer XML on SourceForge. At the time, the project required the approval of our Group Vice President and a herd of lawyers. The reactions of our colleagues were mixed, although as far as we know, none of our kids were beaten up at school as a result of what we were doing. Today, Microsoft has published 175 projects on CodePlex, and we have written a pair of open licenses that are under a page in length and have passed the 500-project mark in adoption as others in the community have decided to use them. I also run a training class that teaches people around the company how to engage in open source projects and make them successful. The volume of projects over the past year has forced us to develop processes for approving and publishing projects that are easy to understand and administer.
As Microsoft’s engagement with open source grows, we have to move from being trailblazers to being road-builders. When you’re blazing a trail, organization, bureaucracy, and majority rule are a burden. In the beginning, a passionate group of people with strongly held beliefs and the will to persevere in the face of doubts and doubters is what it’s all about. When the trail is blazed and you’re keeping a four-lane road open, the challenges are very different. Traffic laws, driver’s licenses, public works, and law enforcement are all necessary and these things require the broad support of the people who use the road and live on the adjacent property. There’s nothing quite as effective in gaining this support as giving people a voice in how things are run. As we look forward to the next three years, we already see the needs of our constituents driving our priorities for licensing, infrastructure, and process. Although open source at Microsoft and the OSI are two different animals, I would submit to you that both are at a point in their maturity where their constituencies need to become more involved to maintain growth. While it’s important to focus on the needs of a growing community membership, it’s also important to remember why you started it in the first place. In Microsoft’s case, the reason is simple: Customers. IT professionals told us they wanted both platform choices and platform interoperability. Developers told us that they wanted more open collaboration and that the language of that collaboration is code. In response, Microsoft has reached interoperability agreements with several key vendors of open source software, CodePlex is now supporting 2,000 collaborative development projects, and the features of CodePlex itself are largely driven by the votes of the community.
Today, we reached another milestone with the decision to submit our open licenses to the OSI approval process, which, if the licenses are approved, should give the community additional confidence that the code we're sharing is truly Open Source. I believe that the same voices that have been calling for Microsoft products to better interoperate with open source products would applaud should the Open Source Initiative itself open up to more of the IT industry. So what about the flip side of the OSI becoming a membership organization? Could it really be voted out of existence or rendered ineffective? It doesn't seem likely to me. Participation in the OSI and adherence to OSI licensing guidelines and Open Source definitions is entirely voluntary. If it isn't serving the best interests of the community, the community will go elsewhere. Anyone considering an effort to "vote the organization into the ground" would surely realize that such heavy-handedness would be self-defeating. That's not to say that a new membership structure wouldn't lead to change, but I believe that these changes would have to be the result of vigorous consensus building, and that's probably not a bad thing.
I look forward to the submission process and welcome feedback from the community as we continue to grow together.