Follow Us on Twitter
by admin on May 18, 2006 10:30am
Mysteries of Cygwin...
-----Original Message-----
From: steffen
Sent: Saturday, April 29, 2006 8:32 AM
To: Port 25
Subject: (Port25) : You guys should look into _____
Importance: High
cygwin and its mysteries to bring Linux software to Windows
I am using my wife's XP machine a lot after work and hope to compile kdissert (a mindmap tool) for cygwin. It works on coLinux for me already (which you should also discuss) but I felt like not booting something extra.
My effort ended before it really started .... a file aux.h could not be untared. Google told me that this was a special problem with Windows as aux.[ch] are reserved names. This is hillarious.
Pleeze ... fix this behaviour and ... give me kdissert.
This is a common problem when porting applications to Win32, as AUX is one of the few reserved filenames. The other reserved names, regardless of extension, are COMn, CON, LPTn, NUL, and PRN (a lowercase n represents a digit, so LPT1 or COM2 would be reserved names). The only good workaround is to rename the reserved filenames.
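Before extracting an unfamiliar archive on Windows, it can be handy to check for these names up front. The helper below is my own sketch (the function name and exact name list are illustrative, not from the original advice):

```shell
# is_reserved PATH: succeed if the basename (extension stripped) is a
# Win32 reserved device name. COM0/LPT0 are not reserved, hence [1-9].
is_reserved() {
    base=$(basename "$1")
    stem=${base%%.*}
    echo "$stem" | grep -qiE '^(aux|con|nul|prn|com[1-9]|lpt[1-9])$'
}

is_reserved src/datastruct/aux.h && echo "reserved"   # prints "reserved"
is_reserved src/main.cpp || echo "safe"               # prints "safe"
```

Running this against each member name from `tar jtf archive.tar.bz2` would flag problem files before tar ever tries to create them.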
I took a quick look at the kdissert source and found aux.h is included in 22 files. I would suggest renaming aux.h to something else like kdissert_aux.h and either manually editing the source files or using sed (or your stream editor of choice) to make it a little less painful.
Great, you say, but how do you rename or extract a file from an archive if Windows won't let you create it in the first place? Tar just hangs when it gets to the aux.h file in kdissert-1.0.6pre3.tar.bz2. The easiest solution would be to rename and modify the source on a separate Linux machine or VPC. However, if all you have access to is a Windows machine with Cygwin, you can still work around this problem.
Extract the contents of aux.h to another file using tar.
$ tar jxvf kdissert-1.0.6pre3.tar.bz2 --to-stdout \
    kdissert-1.0.6pre3/src/kdissert/datastruct/aux.h > kdissert_aux.h
Make sure to exclude aux.h when un-tarring so tar doesn't error out.
$ tar -jxvf kdissert-1.0.6pre3.tar.bz2 --exclude \
    kdissert-1.0.6pre3/src/kdissert/datastruct/aux.h
Copy kdissert_aux.h to the correct place
$ cp kdissert_aux.h kdissert-1.0.6pre3/src/kdissert/datastruct/kdissert_aux.h
Modify the source files to use the newly named kdissert_aux.h.
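As a sketch of that last step, sed can rewrite every file that still references the old header. A throwaway demo file stands in for the real kdissert tree here, since the actual layout and include style may differ:

```shell
# Stand-in for the real source tree; the actual kdissert layout differs.
mkdir -p demo/src
printf '#include "aux.h"\nint x;\n' > demo/src/main.cpp

# Rewrite every file that mentions aux.h to use kdissert_aux.h instead
# (assumes GNU sed for the in-place -i flag, as shipped with Cygwin).
grep -rl 'aux\.h' demo/src | while read -r f; do
    sed -i 's/aux\.h/kdissert_aux.h/g' "$f"
done
```

With the real tree you would point grep at kdissert-1.0.6pre3/src instead of the demo directory, and spot-check a couple of files afterward before running configure.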
This should at least get you started towards porting kdissert to Win32/Cygwin. You also might want to check out and keep an eye on KDE's native Windows development, since further development of KDE on Cygwin has stopped.
Also see: http://www.microsoft.com/technet/interopmigration/unix/sfu/portappc.mspx
KDE on Cygwin http://kde-cygwin.sourceforge.net/ http://mail.kde.org/pipermail/kde-cygwin/2005-September/003009.html
Development of native KDE on Windows http://sourceforge.net/forum/forum.php?forum_id=507276
by admin on June 21, 2006 01:02pm
Last week we announced the formation of an Interoperability Customer Executive Council. This is something many folks here at Microsoft have been working on and thanks go to a wide variety of people. I was involved from an OSS perspective and will continue to be active with both our internal groups and this customer council to help this mission succeed. As you can imagine, interoperability is a very wide and loosely defined word – it can mean many things to many people. Particularly when you produce a lot of software! Rather than try to ‘pick’ certain areas to focus on (believe it or not, Microsoft is constrained by time and money like all other businesses) we decided the most effective and beneficial path would be to have our customers drive this conversation. Thus was born this council, formed by customers of different sizes, business types, and geographic location, in order to provide guidance on interoperability issues that are most important to customers, including connectivity, application integration and data exchange.
Interoperability in a heterogeneous environment doesn’t happen accidentally, nor does having your code open make everything ‘just work’ together. It takes thoughtful design and architecture; it takes time, effort and engineering discipline. It requires working closely with partners, competitors and customers. It also requires mature understanding that all things don’t necessarily require interoperability, and frequently those things that do, sometimes require different types of interoperability. Point being: it takes focus, energy and commitment. This council is a great step and example of this commitment from Microsoft.
I’m very excited by this step forward, below are some further news stories about this:
Clients to advise Microsoft on software linking
Microsoft Customer Council to Focus on Interoperability
by Peter Galli on October 24, 2008 05:03pm
Getting external feedback is always a good thing, and something that Microsoft has been trying to do a lot more of, particularly with regard to its focus on openness and interoperability with various open source communities.
So it was with great interest that I read the trip report for the CIFS Conference and AD Plugfest, by Andrew Bartlett of the Samba Team, which is a candid assessment of the great challenges, successes and frustrations that are part and parcel of such complex arrangements.
But all that has been achieved so far is in large measure due to the passion, commitment, dedication and tenacity of the Samba team, and their openness to working towards a shared goal. This is truly a great example of how Microsoft and those in the open source community can work productively together.
While there has been some skepticism that Microsoft's focus on interoperability is simply for compliance's sake, the reality is that the work the company has been doing with Samba and others provides concrete examples of our commitment to openness and to working with open source communities.
So, the background to this is that, in late 2007, the Protocol Freedom Information Foundation, a non-profit organization created by the Software Freedom Law Center, reached an agreement with Microsoft that allows Samba to create, use and distribute implementations of all the protocols that allow workgroup servers to connect with Windows.
Microsoft has made protocol documentation for the Windows protocol programs available online and for download from MSDN, and developers are not required to execute a license or pay a royalty or other fee to have full access to this documentation.
As actions really do speak louder than words, it was gratifying to hear from Bartlett that Samba feels it has a "beachhead at Redmond, and a department committed to providing the Free Software community with answers or clarification on any reasonable interoperability question."
by hjanssen on January 25, 2007 11:50am
Whether deploying Windows in a Linux or UNIX environment, or vice versa, identity management can be a challenge. In order to explore this topic, Hank spent some time talking with Dustin Puryear: author, consultant, featured speaker at TechX World and owner of Puryear Information Technology, LLC. Dustin specializes in solving integration challenges related to mixed environments and has penned a few books on the topic. Two of his books, "Best Practices for Managing Linux and UNIX Servers" and "Spam Fighting and Email Security for the 21st Century", are available as free e-books on his site.
In this podcast (Part 1 of 2) Hank and Dustin explore issues related to implementing Identity Management solutions and focus primarily on the importance of policies. While the act of deployment is by default important, without proper policies even the best solution can fall short. In the next podcast the conversation will focus on implementation strategies. If you have particular questions you'd like answered in the next podcast, please feel free to comment below.
by kishi on December 21, 2006 07:34pm
This blog continues what I started writing about with Thinking About HPC Infrastructure and what Frank wrote about in Overloading Clusters.
After reading through the previous blogs on HPC, someone might ask “What are some of the core components of HPC?”. After all, once you’ve seen the outside of a Maserati or a De Tomaso Pantera, you’re not going to be satisfied just by ogling it. Even after a test drive, the engineer in you will want to pop the hood and see what’s inside. Taking a similar approach, let’s uncover some underlying HPC technologies by looking at any basic HPC setup. Once all the provisioning has been completed, the HPC system will be physically deployed with an OS, relevant drivers, utilities, etc. Yet, before the actual HPC application can be installed, there remains a critical step in the process: configuration of the cluster and file system, along with any tools and interfaces such as MPI (Message Passing Interface). After peeling through the HPC application layer, it’s worthwhile to do a “deep-dive” into what really runs HPC clusters. The broad categories of these tools are covered below.
If you’re trying to understand the “why” behind the existence of these tools and their importance, take Cluster Management for example. Cluster configuration, installation and management can be difficult and require intimate familiarity with the HPC hardware, OS, underlying architecture, etc. Without specific tools that attend to and manage specific underlying HPC sub-components, HPC just wouldn’t be what it is. So, it is worthwhile to understand the unique installation experience of tools such as the ones listed below to appreciate the complexity of HPC systems. Ready? Let’s dive into the installation and function of these tools:
1. SCALI: The SCALI management and MPI software packages provide deployment, monitoring and job scheduling services for a cluster. After you deploy this software, you will be able see all the compute nodes that may have been preconfigured or are configured on your system. Scali will enable you to monitor the systems and run jobs using the SCALI graphical interface. In order to license the SCALI software, you must utilize the scainstall command to produce a license request file. This file can then be sent to SCALI to receive a permanent key. For those that need some hand-holding through this, luckily SCALI provides very comprehensive documentation on their website. A large portion of the SCALI Manage User’s Guide is dedicated to pre-setup planning and configuration of the cluster and the network. The documentation provides detailed recommendations about how you can set up their Ethernet-based network environment and out-of-band management network. The documentation also provides a general overview about how to install and configure higher performance interconnects, including bonded Ethernet, Infiniband, Myrinet and SCI. The SCALI Manage interface provides simple tools to assist in configuring and testing DET, Infiniband, and Myrinet devices for use with the SCALI MPI implementation. The SCALI MPI software supports multiple Infiniband stacks including Mellanox, Topspin, Voltaire and Infinicon.
2. HP-MPI: HP-MPI is Hewlett-Packard’s Linux-based implementation of the Message Passing Interface (MPI). Many of the utilities distributed with HP-MPI are similar to other common MPI utilities such as MPICH - e.g. mpicc, mpirun, etc. In order to utilize the HP-MPI software, a license is required for each CPU core in the cluster. To obtain a license file you are required to collect the MAC address from each node (typically eth0) and input that information into a form at licensing.hp.com. The resulting file can then be copied to the compute nodes. The HP-MPI software is non-functional until licensing files are generated for the nodes.
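To illustrate the MAC-gathering step, here is a small helper that pulls the HWaddr field out of classic ifconfig output. The helper name and sample line are my own; on a real cluster you would run ifconfig (or read /sys/class/net/eth0/address) on each node, typically over ssh:

```shell
# mac_of IFCONFIG_TEXT: print the HWaddr field from pre-iproute2
# ifconfig output, the style typical of 2006-era Linux distributions.
mac_of() {
    echo "$1" | sed -n 's/.*HWaddr \([0-9A-Fa-f:]\{17\}\).*/\1/p'
}

sample='eth0  Link encap:Ethernet  HWaddr 00:11:22:33:44:55'
mac_of "$sample"    # prints 00:11:22:33:44:55
```

Collecting one such address per node, alongside its hostname, gives you exactly the list the licensing form asks for.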
3. CSM (Cluster Systems Management): The CSM software suite is designed to automate the deployment and management of cluster nodes. Nodes can be remotely installed with an operating system as well as the CSM software for later monitoring. The CSM software supports RedHat and Novell distributions on multiple platforms. In order to obtain and install the CSM software, one must register with IBM’s website and download the required RPMs. Once configured, CSM can remotely install the operating system and/or the CSM software on the compute nodes. Much like Platform Rocks, CSM makes use of PXE functionality and RedHat’s kickstart or the AutoYaST software to remotely install the operating system. The CSM software provides multiple methods for defining the nodes that should be deployed and managed:
a. The first method involves creating a hostname mapping (hostmap) file, which is a colon-delimited file that defines a number of attributes of each node.
b. The second method involves manually creating and editing a “node definition” (nodedef) file. This is the method the documentation suggests for small clusters.
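As a rough illustration of the colon-delimited hostmap idea (the field layout here is invented for the example; CSM's actual hostmap schema defines many more attributes):

```shell
# A hypothetical hostmap-style line: hostname:hardware:power_method
line='node01:x86_64:ipmi'

# Pull individual attributes out with cut, one field per colon position.
host=$(echo "$line" | cut -d: -f1)
power=$(echo "$line" | cut -d: -f3)
echo "$host uses $power power control"   # prints: node01 uses ipmi power control
```

One line per node in a file like this is enough for a tool to know what to deploy and how to reach it.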
Proper remote power and remote console capabilities greatly ease the administration and deployment of the compute nodes; however, according to the CSM FAQ, remote power management is not absolutely required. All the compute nodes must be rebooted (remotely or manually). They are then PXE booted and installed with RHEL4 using the kickstart installation system.
4. Maui and Torque: Both Torque and Maui are free software that must be compiled from the source distribution on the head node. Maui is an open-source job scheduler for compute clusters. It supports a number of task management features not found in other parallel batch processing software, including policy-based scheduling and prioritization of tasks. Torque is an open-source resource manager for managing compute nodes and scheduled jobs. It can integrate with Maui to provide additional features for scheduling and managing scheduled tasks. Installation of Torque can be done using the guidance available in the Torque 2.0 Admin Manual.
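Once both are built, work reaches the cluster through Torque's queue, with Maui deciding when and where each job runs. A minimal job script might look like the sketch below (resource values and the payload are assumptions for illustration; on the head node you would submit it with qsub and watch it with qstat):

```shell
#!/bin/sh
# Hypothetical Torque job script. The #PBS lines are directives read by
# the Torque server; Maui then schedules the job against its policies.
#PBS -N demo-job
#PBS -l nodes=2:ppn=2
#PBS -l walltime=00:05:00

cd "${PBS_O_WORKDIR:-.}"   # Torque sets PBS_O_WORKDIR when the job runs
msg="job started on $(hostname)"
echo "$msg"
```

Because the #PBS lines are shell comments, the same file also runs directly for a quick sanity check before submission.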
5. Platform Rocks: Platform Rocks is cluster deployment software that facilitates the deployment of various software stacks (“rolls”) onto the compute nodes. The software is capable of deploying the base operating system and utilities required for cluster administration, management and scheduling. The software can also manage configuration and updates to ensure consistency throughout the cluster. Platform Rocks is a suite of utilities that are packaged together as separate installable rolls. One of the main goals of the software is to allow for easy installation and integration of third-party rolls and applications. One unique aspect of the Platform Rocks installation approach is that the software installs an operating system on the head node and also installs all the required rolls at the same time. The software can also automatically set up the subsystem required to install an operating system and other packages on the compute nodes (such as management agents, etc).
That about does it for a quick “deep-dive”. Let me insert a gentle reminder that these are not the only cluster or resource management technologies out there in the HPC space, but rather the most prevalent ones. If you have additional tools that you have worked with, we’d like to hear from you. Thank you for tuning in to Port 25. HAPPY HOLIDAYS!
by jcannon on August 24, 2007 03:58pm
Heading into a long (and quiet) weekend for many folks in the US, I thought I would highlight some useful open source and shared source projects on Codeplex, particularly after reading eWeek's gallery of the Top 25 Most Active Open Source Projects on Codeplex. There are some practical tools in the article - including an open source blogging engine, SQL sample applications and a cool mapping application. What many folks may not know is that there are over 2,000 projects available on Codeplex. Check out, among others:
IronRuby was also released last month (it lives on RubyForge) - John Lam has more details here. If you're interested in learning more, I would also recommend checking out our Sourceforge landing page that highlights open source projects and programs at Microsoft. While you're on Sourceforge, we have some interesting projects over there as well - including the WiX Toolset and the OpenXML/ODF Translator.
Enjoy the weekend (and the downloads). -Jamie
by admin on March 31, 2006 06:59am
Who would have guessed?
A sincere thank you for all the excitement and feedback since we launched Port25 last week. We’ve had a tremendous response and the conversation has been lively to say the least.
There have been hundreds of blog posts and hundreds of emails sent – both through the feedback aliases and many that you have sent directly to me. There have been rants, demands, questions, encouragement, suspicion, affirmation, ideas, pontifications and guidance. There are many of you who gave us technical advice (such as video formats) that was valuable and we’re making those changes – thank you for this input. Many of you have asked about the signal-to-noise ratio, and some of you have commented on this to me both on the blog and privately. I was pretty adamant about keeping the blog post system wide open to start, and introduce a registration system if you wanted us to. We’ve heard this loud and clear, and we’re looking into this now.
Let me clarify some things and hopefully set some expectations. Our goal with Port 25 is to have a community discussion on people working with OSS and Microsoft software. Many of you who know me, know that I’m a no BS type of guy and I’ve spent many years answering the 3am pager calls when problems arise in the data center. At 3am, there’s not a lot of interest in technology dogma and rhetoric. I’m now officially a PHB (hair withstanding) and although I don’t carry the pager, I haven’t lost this core principle. I understand there is going to be philosophy and zealotry, and that’s why I titled this ‘Who would have guessed?', but the work our team does in the OSS research lab is heavily oriented around trying to understand and help real customer interoperability. So this is the type of discussion that you'll hear from us, more than trying to answer why Microsoft doesn't give away all its software for free, etc. I was pretty adamant about keeping the blog post system wide open to start, and introducing a registration system only if you wanted us to. Many of you have asked about the signal-to-noise ratio, and we’ve heard this loud and clear; we’re looking into this now.
This does not mean we won’t discuss the issues, we will, but I wanted to explain our intent and hopefully the community that grows here will be able to focus on productive and progressive technical discussions. I’m sure there are options out there on the Web for those who want to bash Microsoft, or dream up yet another conspiracy theory, but our goal here is to evolve and to hopefully provide information that makes it easier for people using OSS and Microsoft software in the real world.
So what’s next? Sam and Kishi have a variety of topics on deck for discussion and we’re going to be diving into more analysis as well as new profiles on other folks we find interesting at Microsoft. I’ll be blogging soon on a few of my experiences in open source software over the past twelve years, particularly looking to start conversations with you about your approaches and thoughts on these subjects. Also, I’ll talk about conversations I have with customers around the world on what interoperability issues they are interested in discussing. I’m thinking my next blog post should be about the talk I gave at Linux World Boston last week, and some of the ideas on interoperability I shared there. If you'd like to keep track of when we're adding new content to the site, please subscribe to our RSS feed on the home page.
Again, a sincere thank you and I look forward to seeing this community grow. And for the curious, my Russian is very, very rusty.
by jcannon on October 02, 2006 06:06pm
Last week, I was fortunate to take a day & visit the floor of Interop New York. The Interop conference celebrated its 20th anniversary this year, underscoring the persistence & complexity of interoperability as an industry issue. This year’s mission was no different…to discuss achieving the ideal state of all technology talking to itself, and to others.
OK, so here’s the clever metaphor I came up with on my walk over to the Javits Center (I live in NYC – anyone around….let me know). Interoperability is a loaded term since it can mean so many different things to different folks …but why? Because interop is really more about many means toward an end. In fact, I would suggest the goal of most interoperability efforts is to enhance the performance of a total system through improved communication and accessibility of the various subsystems – be that protocols, applications, schemas or operating systems (the means). It’s kind of like…..a city! The photo below was snapped on my way over to the show....the total system of a city only works when its subcomponents work together successfully – be they water, electric, gas, subway, building specs & zoning, urban development & layout, garbage, sewer, security, transit. All of those systems must talk to themselves and to each other to provide a quality of service to the citizens they serve. I skillfully Photoshop’d the photo below to illustrate my point….if you live in or have visited NYC, you know that it’s amazing that the system works so well, every single day. We grumble that interop in IT is hard to achieve with legacy systems 5, 10, 20 years old. Imagine connecting systems dating back to Gangs of New York. Absolutely nuts.
Pretty Complex Stuff (PCS)
The Show More than anything, it was fascinating to hear how customers & vendors are defining interoperability. The pervasive definition was squarely on technology and the increasing role of (here comes the buzz-train) Web 2.0 in connecting data, information and systems. This was reflected in the keynotes, such as Andrew McAfee and SocialText CEO, Ross Mayfield’s presentation. I’ll start by breaking that notion down into a couple more discrete ideas:
The Browser as Epoxy This was the most interesting theme of all ~ across both IT and Developer tracks, interoperability was a problem best addressed by the browser. I phrase it this way because at its most basic level, the browser, through robust XML frameworks and rich presentation layers, is what is bringing disparate data and systems together in a seamless way for enterprise end-users. That quality of adhesiveness is being enabled by technology like AJAX and DHTML – not surprisingly - and subsequently one arrives at ‘Rich Internet Applications.’ What was interesting to me was that interoperability solutions weren’t being implemented over wires to connect servers. Instead, silo’d data was being fed to powerful client machines for cohesion. In essence, the browser was the epoxy for the data & systems throughout the network. Cool stuff indeed.
The (Dubious) Optimism of Enterprise User-Collaboration The keynote on Thursday was dedicated to examining how social software conventions (Wikis, Tagging, RSS) can & should be extended into the enterprise environment. In contrast to the process orientation of most enterprise-level applications, Andrew McAfee, Harvard Business School, pointed out that there was much to gain through large scale, unstructured data sharing in an enterprise & the subsequent emergent analysis that could be leveraged as a learning system. What makes all this possible? The very cool technologies of Web 2.0: Deep Linking, Search, Tagging, RSS, Easy Authoring.
Why dubious? Only the largest sites on the Internet are proof points of success: Digg, Delicious, Technorati….we have no idea how to bridge legacy systems to a new generation of tools in an enterprise, nor the required tipping point at which critical mass is reached and emergent patterns actually become meaningful. (How useful is this really for a small/medium/large organization?) And control. Users tend to be untrusted, unsupervised, distributed, silo’d and working in environments under great rates of change – how does a manager leverage a system like this?
Can I Get a Customer? There were a ton of great demonstrations and eye candy, but I was hard pressed to find a customer talking about a successful interop implementation using most or all of the above technologies. This indicated – to me- we’re still in the early hype cycle of these technologies and lack a customer base and the watered-down expectations that accompany real-world implementation.
Adult Visual Aids: Server Racks & The Show Entrance
It was absolutely worth the day – although I would have liked to have seen more customers on-site. Can’t blame anybody given it was the last day of the conference though. Did you attend the show, or have any questions about it? Share your thoughts; I would love to hear other perspectives on the themes that were presented.
by jcannon on July 05, 2007 03:37pm
We're nineteen days away from OSCON, and very excited about participating at this year's event. Microsoft is a Diamond sponsor of OSCON, and we have a number of interesting open source and Linux interoperability sessions and keynotes planned throughout the show. For those who can attend, we hope you'll join us for some of the highlights below:
If you can't attend in-person, stay tuned to Port 25 for coverage of OSCON, the sessions above - and more... Jamie.
by MichaelF on April 13, 2007 09:21pm
In this interview Sam sits down with a veteran of the gaming industry, Star Trek Deck Plan Expert and Development Manager for the XNA Community Game Platform: Frank Savage. Sam and Frank discuss his background and how he ended up at Microsoft as well as the finer points of XNA. Included are demos of both games that have been developed using XNA as well as how XNA eases much of the work associated with game development allowing users to focus on the game itself.
You can download XNA Express Studio and the XNA Framework here.
by MichaelF on September 20, 2006 06:54pm
Sam interviews Rod Smith, author of the O'Reilly book Linux in a Windows World, a required read for OSSL staff. In this interview, Sam and Rod discuss the impetus for writing the book and what readers can expect, as well as Rod's background and the challenges he faced in penning this useful resource.
Along with the interview, O'Reilly Publications has given us permission to repost an excerpt from the book that provides useful information on configuring a Samba Server.
Big Thanks to O'Reilly for allowing us to make this portion of the book publicly available through Port 25!
Click the cover for the excerpt: (Download PDF)
(cover link: http://blogs.technet.com/cfs-file.ashx/__key/CommunityServer-Blogs-Components-WeblogFiles/00-00-00-86-46-PDFs/2766.excerpt_5F00_linux_5F00_in_5F00_a_5F00_windows_5F00_world.pdf)
Title: Linux in a Windows World First Edition: February 2005 ISBN: 0-596-00758-2 Pages: 494 URL: http://www.oreilly.com/catalog/linuxwinworld/index.html
Copyright © 2006 O'Reilly Media, Inc. All rights reserved. Used with permission.
by anandeep on February 15, 2007 08:18pm
I am writing this from the Big Apple. The LinuxWorld Open Solutions Summit is in New York, running from February 13th through February 15th. I questioned the wisdom of holding a conference in February on the East Coast rather than the milder shores of the West Coast - since I arrived in the middle of a snowstorm after a day's delay caused by flight cancellations. But once I got here, the bright lights on Broadway dissipated all doubts!
The Open Solutions Summit is a much smaller conference than the LinuxWorld conference (which I covered in a previous blog), but many more IT professionals show up than vendors. This makes for a more OSCON-like ambience, except that it is IT focused rather than developer focused.
Microsoft held an evening mixer in the conference hotel (the Marriott Marquis) in the cool rotating restaurant on top of the building. The NYC skyline was spectacular, but what I really enjoyed were the conversations with the people. I got to chat with people such as Jeremy White of CodeWeavers. When he asked me if I knew what WINE was, I answered "Wine is Not an Emulator?". I think Jeremy appreciated me knowing that!
Lincoln Durey from EmperorLinux was there and told me about the new tablet laptop that used Jarnal-based handwriting recognition. He did say it wasn't as good as the Windows Tablet PC handwriting recognition! I think OneNote is the killer app for the tablet. Is there an open source application like OneNote out there? What was cool about the tablet was that it has a WACOM film on top of a regular LCD that talks to a serial port. He even had a Toughbook with the same stuff on it.
It was a pleasure to meet Tony Luck who is a Principal Engineer at the Open Source Technology Center at Intel. I think we could identify with each other because we worked in similar organizations (OSTC / OSSL). He is a kernel maintainer - which means he is royalty as far as Open Source goes. He works on making sure that Intel chip features work with and are fully utilized by Linux in his day job. Way cool - and he's a nice guy to boot.
Another person I met was Gianluca Brigandi, who is with a small company called Novascope in Argentina. His company is building out a product based on the Java Open Single Sign-On (JOSSO) standard, besides providing open source code for JOSSO. He is very concerned with interoperability and was keen to hear about Microsoft's open source efforts. I pointed him to http://www.codeplex.com/ and will probably be interacting with him on interoperability further. We spoke about everything from APIs developers can use to interact with identity standards, to open source licenses and what they mean for companies like his.
What brought home the power of conversations to me was a conversation with Rob Donath from SpikeSource. He approached us with interoperability and packaging questions about Windows and open source software in a virtualization environment. And he said that he wouldn't have known who to talk to if we hadn't been reaching out to the open source community through events like this and Port 25.
I love my job! :-)
by MichaelF on November 13, 2006 10:39am
While we were in San Jose for Zendcon we had the opportunity to spend some time with Jacob Taylor, CTO of Sugar CRM. As you may remember, Microsoft and Sugar announced a technical collaboration in February focused on improving the experience of deploying and running Sugar on Windows. Sugar also decided to release a new Sugar Suite distribution under the Ms-CL (Microsoft Community License) as part of the Shared Source Program.
In this interview Sam and Jacob discuss the path that led Jacob to being co-founder and CTO of Sugar CRM as well as what has been happening at Sugar since February. We also get Jacob's thoughts on the commercial open-source model that Sugar pioneered and the collaboration announced by Microsoft and Zend which has an impact on Sugar and its customers.
by hjanssen on October 29, 2008 12:38pm
Wednesday - day two of my IPC conference in Mainz, a developer-oriented PHP conference.
Very well attended. The most negative thing I can say about this conference is that, for some unknown (but brilliant beyond my level of comprehension) reason, the venue is a 30-minute cab ride from the speaker hotel. And the shuttle provided in the morning leaves every 30 minutes, has 5 seats, and has a line of 20+ people waiting for it.
And yes, there are hotels closer by.
It is pretty cool to see how things changed in the last few years; people do not stop/point and stare anymore when they see Microsoft people walking around and actively engaging. People are happy to see us.
There are a lot of sessions on a variety of topics, but I get the most out of talking to people outside of these sessions. I am starting to lose track of everybody I have talked to.
Anyhoot, I had a good conversation with Brian Aker. For those who do not know Brian, he is one of the people behind Drizzle. He gave a keynote yesterday that was extremely well attended, and talked about the state of Drizzle, which is starting to become a really interesting database.
One session that was pretty unique was Pierre Joye and Garrett Serack doing a joint session on how to build PHP on Windows. This used to require sacrificing your favorite item, standing on your head, facing north, and chanting to Ra to get it to build.
The work we have done with the PHP community over the past few months to make Windows the best possible platform for PHP has greatly improved and accelerated things. All the old libraries have been updated to their latest versions, something that had not been done in over 10 years for some of them. More importantly, these libraries are now the same versions (and thus have the same behavior) as their Linux counterparts.
Additionally, the old build system was VC6, which means Visual Studio 1998!! The build system is now VC9, or Visual Studio 2008. And, depending on the speed of your machine, it builds in a few minutes. And voilà, a brand new, shiny, hot-from-the-oven, newly minted PHP.
Now we have a great place to start from: a build for Windows that we have all the code for, made with a compiler that comes out of this century. That leaves us ready for the next step, optimizing PHP on Windows, and that is what we will be working on for the foreseeable future. If you can, please participate.
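For the curious, the VC9 build really does boil down to a handful of commands. Here is a minimal sketch, assuming you have the PHP 5.3 sources unpacked and are working from a Visual Studio 2008 (VC9) command prompt with the build dependencies in place; the path shown is illustrative, not from the talk:

```shell
rem From a Visual Studio 2008 (VC9) command prompt,
rem inside the unpacked PHP 5.3 source tree (path is an example).
cd C:\php-sdk\php-5.3-src

rem Generate configure.bat from the Windows build templates.
buildconf

rem A minimal CLI-only configuration; enable extensions as needed.
configure --disable-all --enable-cli

rem Build; on a reasonably fast machine this takes a few minutes.
nmake
```

The resulting php.exe lands in the Release (or Debug) output directory under the source tree.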
These changes are incorporated into the latest builds, starting with PHP 5.3. You can download them here.
If you want to check out in person what we did, and how you now can build PHP for Windows, check out this link.
BTW, Pierre and Garrett both have the misfortune of reporting to me at the OSTC. And yes, there are questions about their sanity.
A few more days, and then back home, where my wife, kid, and dog claim they are looking forward to having me back after 2 weeks. Off to get some rest.......
by MichaelF on September 05, 2006 06:01pm
On a recent visit to Thailand I had the opportunity to meet a variety of customers, open source community members, government officials and new Microsoft people. A great part of my job is the opportunity to understand the state of the software industry in different countries around the world – both in developed and emerging countries. It’s fascinating to see the patterns of similarity, and often surprising to learn about the myriad country-specific characteristics that influence the evolution and growth of a software industry. My visit to Thailand was remarkable in both of these respects. I visited at a time when the government was in a somewhat unstable situation – although the environment was amazingly calm and under control. The Thai software industry is relatively segmented, with some areas quite advanced and some in significant early development.
The role of Microsoft in a country like Thailand is somewhat different than in many large developed countries. In countries like Thailand, Microsoft participates heavily in the growth and health of the software industry. Certainly we do this in large countries as well, but it’s much more direct and hands-on in countries like Thailand. Naysayers will claim this is so we can just ‘sell more’ to new audiences. Of course we care about software sales (we’re a commercial software business!) but in these environments we prioritize the condition of the software ecosystem, as it’s the basis for any near or future business. For example, the Microsoft general manager for a country like Thailand (Andrew in the photo below – far right) will spend a good portion of their time working on country-wide initiatives for improved software education, or in cross-vendor forums focusing on improved software security, or in helping the government plan software infrastructures for future natural disasters (tsunamis, for instance). What I find interesting is that many people, particularly in the U.S., don’t often see this side of Microsoft, and it is a very important part of our role as a business, community and industry leader to help the entire software ecosystem grow and prosper.
Related to this, one of my trip highlights was a dinner in Bangkok with some of the leading science and technology thinkers in the Thai government around the future of IT in Thailand*. Below is a photo of (left to right): Dr. Chadamas Tuwasetakul, Assistant to Director, National Electronic and Computer Technology Center (NECTEC); Dr. Pairash Thajchayapong, Senior Advisor to National Science and Technology Development Agency; me; Dr. Thaweesak Koanantakool, Director, NECTEC; Andrew McBean, General Manager, Microsoft Thailand.
We had a great discussion about software for children in K-12 classrooms, and about the benefits and challenges of delivering country-wide computing infrastructure in environments with numerous IT challenges (such as very few technical support staff). We also talked about commercial and free software, Microsoft’s position on OSS, standards, interoperability and our future product line – particularly Windows Vista and Office 2007. It was a great and opinionated discussion, and I learned much from Dr. Tuwasetakul, Dr. Thajchayapong, and Dr. Koanantakool; their insight was highly valuable.
The software industry is going through tremendous growth in Thailand. I feel there is much to be learned from watching these next-frontier software ecosystems: how they develop their industries in this new era of software economies, how they learn from other economies, countries and trends, and most importantly how they create a software legacy that is both prosperous and uniquely Thai.
O’Reilly blogger Allison had an interesting term, technodiversity, that I think is a great way of thinking about ecosystem evolution. I believe strongly that intellectual invention, innovation, and both pragmatic and expressible interoperability are keys to achieving this type of technodiversity. In an upcoming blog entry I hope to dive deeper into this area of pragmatic and expressible interoperability, to describe why this often fuzzy term ‘interoperability’ is crucial to growing ecosystems. Warning – expect unusual correlations to trains, newts and other seemingly random but (to be illustrated) relevant examples.
* To be fair, earlier in the day I spent time with James Clark, long-time OSS/XML developer, now part of SIPA (Software Industry Promotion Agency) and the leading OSS promoter in Thailand, and I would include James as one of the leading thinkers on software in Thailand as well.