by jcannon on August 01, 2006 02:51pm
Today, the IT departments offering and managing various IT services can find themselves in what I would call a “pressure-cooker”. They are faced with a multitude of tasks and added pressure to maintain daily operations while driving efficiency, managing the growing complexity of service offerings and, most importantly, doing so while keeping pace with industry best practices. This has been one of the most explosive areas of growth and re-examination over the past few years. Back in my Ops days, I trained under ITIL (the IT Infrastructure Library) and MOF (the Microsoft Operations Framework) to get a first-hand look at some of the best Service Management practices in the industry. No matter how good I thought our Service Management practices might have been, I could not help but think in terms of the maturity level of the services that can be achieved by applying these principles. When you get down to it, you realize that the heart and soul of effective Service Management lies in how mature the offering and support model is. I learned a lot from the ITIL Service Management Essentials course, which I attribute to the research and practice that have gone into developing these models. I’d like to share with you what made sense to me:
I am very eager to hear back from those of you that are an integral part of the Service Management Lifecycle. Please share your experiences, challenges and learning with us.
Kindest Regards and have a great week ahead!
by kishi on December 21, 2006 07:34pm
This blog continues what I started writing about with Thinking About HPC Infrastructure and what Frank wrote about in Overloading Clusters.
After reading through the previous blogs on HPC, someone might ask, “What are some of the core components of HPC?” After all, once you’ve seen the outside of a Maserati or a De Tomaso Pantera, you’re not going to be satisfied just by ogling it. Even after a test drive, the engineer in you will want to pop the hood and see what’s inside. Taking a similar approach, let’s uncover some underlying HPC technologies by looking at a basic HPC setup. Once all the provisioning has been completed, the HPC system will be physically deployed with an OS and the relevant drivers, utilities, etc. Yet before the actual HPC application can be installed, there remains a critical step in the process: configuration of the cluster and file system, along with any tools and interfaces such as MPI (Message Passing Interface). After peeling back the HPC application layer, it’s worthwhile to do a “deep dive” into what really runs HPC clusters. The broad categories of these tools are:
If you’re trying to understand the “WHY” behind the existence of these tools and their importance, take Cluster Management as an example. Cluster configuration, installation and management can be difficult, and require intimate familiarity with the HPC hardware, OS, underlying architecture and so on. Without specific tools that attend to and manage specific underlying HPC sub-components, HPC just wouldn’t be what it is. So it is worthwhile to understand the unique installation experience of tools such as the ones listed above in order to appreciate the complexity of HPC systems. Ready? Let’s dive into the installation and function of these tools:
1. SCALI: The SCALI management and MPI software packages provide deployment, monitoring and job scheduling services for a cluster. After you deploy this software, you will be able to see all the compute nodes that may have been preconfigured or are configured on your system. SCALI will enable you to monitor the systems and run jobs using the SCALI graphical interface. In order to license the SCALI software, you must utilize the scainstall command to produce a license request file. This file can then be sent to SCALI to receive a permanent key. For those who need some hand-holding through this, SCALI luckily provides very comprehensive documentation on their website. A large portion of the SCALI Manage User’s Guide is dedicated to pre-setup planning and configuration of the cluster and the network. The documentation provides detailed recommendations about how to set up an Ethernet-based network environment and an out-of-band management network. The documentation also provides a general overview of how to install and configure higher-performance interconnects, including bonded Ethernet, Infiniband, Myrinet and SCI. The SCALI Manage interface provides simple tools to assist in configuring and testing DET, Infiniband, and Myrinet devices for use with the SCALI MPI implementation. The SCALI MPI software supports multiple Infiniband stacks, including Mellanox, Topspin, Voltaire and Infinicon.
2. HP-MPI: HP-MPI is Hewlett-Packard’s Linux-based implementation of the Message Passing Interface (MPI). Many of the utilities distributed with HP-MPI are similar to other common MPI utilities such as MPICH – e.g. mpicc, mpirun, etc. In order to utilize the HP-MPI software, a license is required for each CPU core in the cluster. To obtain a license file, you must collect the MAC address from each node (typically eth0) and enter that information into a form at licensing.hp.com. The resulting file can then be copied to the compute node. The HP-MPI software is non-functional until licensing files are generated for the nodes.
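As a sketch of that collection step, here is one way to gather and sanity-check the per-node MAC addresses before pasting them into the licensing form. The node names, the output file name, and the inventory format are my own illustration, not anything from HP's documentation; on a real cluster you would replace the sample data with something like `ssh "$node" cat /sys/class/net/eth0/address`.

```shell
#!/bin/sh
# Collect "hostname mac" pairs for an HP-MPI license request and keep
# only well-formed MAC addresses, warning about anything malformed.

collect_macs() {
    while read -r node mac; do
        if echo "$mac" | grep -Eq '^([0-9a-f]{2}:){5}[0-9a-f]{2}$'; then
            echo "$node $mac"
        else
            echo "WARNING: $node reported a malformed MAC: $mac" >&2
        fi
    done
}

# Sample inventory standing in for per-node ssh output:
collect_macs <<'EOF' > hpmpi-license-macs.txt
node01 00:16:3e:12:34:56
node02 00:16:3e:ab:cd:ef
node03 not-a-mac
EOF

cat hpmpi-license-macs.txt   # the two valid entries survive
```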
3. CSM (Cluster Systems Management): The CSM software suite is designed to automate the deployment and management of cluster nodes. Nodes can be remotely installed with an operating system as well as the CSM software for later monitoring. The CSM software supports RedHat and Novell on multiple platforms. In order to obtain and install the CSM software, one must register with IBM’s website and download the required RPMs. Once configured, CSM can remotely install the operating system and/or the CSM software on the compute nodes. Much like Platform Rocks, CSM makes use of PXE functionality and RedHat’s Kickstart or the AutoYaST software to remotely install the operating system. The CSM software provides multiple methods for defining the nodes that should be deployed and managed:
a. The first method involves creating a hostname mapping (hostmap) file, which is a colon-delimited file that defines a number of attributes of each node.
b. The second method involves manually creating and editing a “node definition” (nodedef) file. This is the method suggested by the documentation for use with small clusters.
Proper remote power and remote console capabilities greatly ease the administration and deployment of the compute nodes; however, according to the CSM FAQ, remote power management is not absolutely required. All the compute nodes must be rebooted (remotely or manually). They are then PXE-booted and installed with RHEL4 using the Kickstart installation system.
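To make the colon-delimited format from the first method concrete, here is an illustrative hostmap-style file plus a quick consistency check. The field names and values are placeholders of my own choosing to show the shape of such a file; the actual fields a real hostmap uses are defined in the CSM manuals.

```shell
#!/bin/sh
# Write an illustrative colon-delimited node-attribute file in the
# spirit of a CSM hostmap: hostname, hardware-control point, power
# method, console server. These fields are examples, not CSM's schema.
cat > hostmap.example <<'EOF'
node01:hmc01:hmc:console01
node02:hmc01:hmc:console01
node03:hmc02:hmc:console02
EOF

# Sanity check: every line should have the same number of fields.
awk -F: 'NF != 4 { bad++ } END { print (bad ? "malformed" : "ok") }' hostmap.example
```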
4. Maui and Torque: Both Torque and Maui are free software that must be compiled from the source distribution on the head node. Maui is an open-source job scheduler for compute clusters. It supports a number of task management features not found in other parallel batch processing software, including policy-based scheduling and prioritization of tasks. Torque is an open-source resource manager for managing compute nodes and scheduled jobs. It can integrate with Maui to provide additional features for scheduling and managing scheduled tasks. Installation of Torque can be done using the guidance available in the Torque 2.0 Admin Manual.
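Once Torque and Maui are running, work is submitted as an ordinary shell script whose `#PBS` comment lines carry the resource requests. Here is a minimal sketch of such a job script (the job name and resource values are arbitrary examples, not from the Torque manual); since the `#PBS` directives are plain comments to the shell, the script can also be run directly as a smoke test.

```shell
#!/bin/sh
# Write a minimal Torque/PBS job script. The #PBS lines are directives
# read by qsub at submission time; to the shell they are comments.
cat > example_job.sh <<'EOF'
#!/bin/sh
#PBS -N example_job
#PBS -l nodes=2:ppn=2
#PBS -l walltime=00:10:00
#PBS -j oe
echo "Job running on $(hostname)"
EOF
chmod +x example_job.sh

# On a live cluster you would submit it with:  qsub example_job.sh
# (Maui then decides when and where it runs, based on its policies.)
# Run locally it simply prints its host:
./example_job.sh
```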
5. Platform Rocks: Platform Rocks is cluster deployment software that facilitates the deployment of various software stacks (“rolls”) onto the compute nodes. The software is capable of deploying the base operating system and utilities required for cluster administration, management and scheduling. The software can also manage configuration and updates to ensure consistency throughout the cluster. Platform Rocks is a suite of utilities that are packaged together as separate installable rolls. One of the main goals of the software is to allow for easy installation and integration of third-party rolls and applications. One unique aspect of the Platform Rocks installation approach is that the software installs an operating system on the head node and installs all the required rolls at the same time. The software can also automatically set up the subsystem required to install an operating system and other packages on the compute nodes (such as management agents, etc).
That about does it for a quick “deep-dive”. Let me insert a gentle reminder that these are not the only cluster or resource management technologies out there in the HPC space but rather the ones most prevalent. If you have additional tools that you have worked with, we’d like to hear from you and thank you for tuning in to Port 25. HAPPY HOLIDAYS!
by jcannon on August 24, 2007 03:58pm
Heading into a long (and quiet) weekend for many folks in the US, I thought I would highlight some useful open source and shared source projects on CodePlex, particularly after reading eWeek's gallery of the Top 25 Most Active Open Source Projects on CodePlex. There are some practical tools in the article - including an open source blogging engine, SQL sample applications and a cool mapping application. What many folks may not know is that there are over 2,000 projects available on CodePlex. Check out, among others:
IronRuby was also released last month (it lives on RubyForge) - John Lam has more details here. If you're interested in learning more, I would also recommend checking out our Sourceforge landing page that highlights open source projects and programs at Microsoft. While you're on Sourceforge, we have some interesting projects over there as well - including the WiX Toolset and the OpenXML/ODF Translator.
Enjoy the weekend (and the downloads). -Jamie
by admin on June 21, 2006 01:02pm
Last week we announced the formation of an Interoperability Customer Executive Council. This is something many folks here at Microsoft have been working on, and thanks go to a wide variety of people. I was involved from an OSS perspective and will continue to be active with both our internal groups and this customer council to help this mission succeed. As you can imagine, interoperability is a very wide and loosely defined word – it can mean many things to many people, particularly when you produce a lot of software! Rather than try to ‘pick’ certain areas to focus on (believe it or not, Microsoft is constrained by time and money like all other businesses), we decided the most effective and beneficial path would be to have our customers drive this conversation. Thus was born this council, formed by customers of different sizes, business types, and geographic locations, in order to provide guidance on interoperability issues that are most important to customers, including connectivity, application integration and data exchange.
Interoperability in a heterogeneous environment doesn’t happen accidentally, nor does having your code open make everything ‘just work’ together. It takes thoughtful design and architecture; it takes time, effort and engineering discipline. It requires working closely with partners, competitors and customers. It also requires mature understanding that all things don’t necessarily require interoperability, and frequently those things that do, sometimes require different types of interoperability. Point being: it takes focus, energy and commitment. This council is a great step and example of this commitment from Microsoft.
I’m very excited by this step forward, below are some further news stories about this:
Clients to advise Microsoft on software linking
Microsoft Customer Council to Focus on Interoperability
by admin on April 25, 2006 07:26pm
Reading through some of the first posts to Port25, there have been some great ideas and posts. There have also been some really interesting conspiracy theories, so let me clarify some things we’ve seen in the blog posts, emails, and even other Web sites and articles:
Port25 is not an attempt to subvert the OSS community
Microsoft does not have people posting to Port25 trying to make the OSS community ‘look dumb’
Microsoft moderators do not remove all the controversial posts
Port25 is not a marketing or PR stunt
Port25 is not related to the legal stuff in the EU
The guys in the OSS lab are not soulless sell-outs or villainous rascals
Port25 does not have a hidden agenda
It really does not hurt our feelings when people try to get personal or make unfounded and derogatory claims – this is part of the reality of our jobs
I don’t speak Russian – but I’m thinking about learning since we launched Port25
You can disagree with the above, but this is the truth. We are working hard to make Port25 more of a reciprocal community, but it requires everyone to partake in a two way conversation. I understand there is a lot we do today and need to do in the future, but it requires smart and mature people and companies to work collaboratively. I’m sure this will ruffle some feathers, but it’s the only way it will really work. And after you put down the politics and rhetoric, I think you’ll see the same – it’s no different than any other relationship. Working together can make it happen.
by MichaelF on September 14, 2006 03:15pm
We've received numerous inquiries regarding the Windows Vista Beta program so I wanted to make sure anyone subscribed to our RSS feed who also happens to be interested sees this before the program fills up.
In order to download the bits (3.0 GB for the 32-bit version, to be exact) you will need a Windows Live account.
by admin on May 18, 2006 10:30am
Mysteries of Cygwin...
-----Original Message-----
From: steffen
Sent: Saturday, April 29, 2006 8:32 AM
To: Port 25
Subject: (Port25) : You guys should look into _____
Importance: High
cygwin and its mysteries to bring Linux software to Windows
I am using my wife's XP machine a lot after work and hope to compile kdissert (a mindmap tool) for cygwin. It works on coLinux for me already (which you should also discuss) but I felt like not booting something extra.
My effort ended before it really started .... a file aux.h could not be untarred. Google told me that this was a special problem with Windows, as aux.[ch] are reserved names. This is hilarious.
Pleeze ... fix this behaviour and ... give me kdissert.
This is a common problem when porting applications to Win32, as AUX is one of the few reserved filenames. Other reserved names, regardless of extension, are AUX, CON, NUL, PRN, COMn, and LPTn (a lowercase n represents a digit, so LPT1 or COM2 would be reserved names). The only good workaround is to rename the reserved filenames.
I took a quick look at the kdissert source and found that aux.h is included in only 22 files. I would suggest renaming aux.h to something else, like kdissert_aux.h, and either manually editing the source files or using sed (or your stream editor of choice) to make it a little less painful.
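A sketch of that edit, rehearsed on a throwaway tree so it can be tried safely: the file names mirror the rename described above, but the directory layout and the sed invocation are the generic technique, not commands from the kdissert documentation.

```shell
#!/bin/sh
# Rehearse the rename on a scratch copy: create a source file that
# includes aux.h, rename the header, and patch every #include with sed.
mkdir -p scratch/src
echo '#include "aux.h"' > scratch/src/widget.cpp
echo '/* aux header */' > scratch/src/aux.h   # legal on Linux; the Win32
                                              # restriction is what we are
                                              # working around

mv scratch/src/aux.h scratch/src/kdissert_aux.h

# Patch all source files that reference the old name:
grep -rl '"aux\.h"' scratch/src | while read -r f; do
    sed -i 's/"aux\.h"/"kdissert_aux.h"/g' "$f"
done

cat scratch/src/widget.cpp   # now includes "kdissert_aux.h"
```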
Great, you say, but how do you rename or extract a file from an archive if Windows won't let you create it in the first place? Tar just hangs when it gets to the aux.h file from kdissert-1.0.6pre3.tar.bz2. The easiest solution would be to rename and modify the source on a separate Linux machine or VPC. However, if all you have access to is a Windows machine with Cygwin, you can still work around this problem.
Extract the contents of aux.h to another file using tar:
$ tar jxvf kdissert-1.0.6pre3.tar.bz2 --to-stdout \
    kdissert-1.0.6pre3/src/kdissert/datastruct/aux.h > kdissert_aux.h
Make sure to exclude aux.h when un-tarring so tar doesn't error out:
$ tar -jxvf kdissert-1.0.6pre3.tar.bz2 --exclude \
    kdissert-1.0.6pre3/src/kdissert/datastruct/aux.h
Copy kdissert_aux.h to the correct place
$ cp kdissert_aux.h kdissert-1.0.6pre3/src/kdissert/datastruct/kdissert_aux.h
Modify the source files to use the newly named kdissert_aux.h.
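To see the whole workaround end to end without the real tarball, here is a self-contained rehearsal: it builds a tiny archive that contains an aux.h (easy on Linux, where the name is legal), then applies the same --to-stdout and --exclude steps shown above. The `proj/` paths are stand-ins for the kdissert tree.

```shell
#!/bin/sh
# Build a toy archive with the problem file inside, then extract it the
# way you would have to on Windows/Cygwin: stream aux.h out under a new
# name and exclude it from the normal extraction.
mkdir -p proj/src
echo '/* aux header */' > proj/src/aux.h
echo 'int main(void){return 0;}' > proj/src/main.c
tar cjf proj.tar.bz2 proj
rm -r proj

# Step 1: extract aux.h's contents to a safely named file
tar xjf proj.tar.bz2 --to-stdout proj/src/aux.h > kdissert_aux.h

# Step 2: extract everything else, skipping the reserved name
tar xjf proj.tar.bz2 --exclude proj/src/aux.h

# Step 3: put the renamed header where the build expects it
cp kdissert_aux.h proj/src/kdissert_aux.h
ls proj/src
```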
This should at least get you started towards porting kdissert to Win32/Cygwin. You also might want to check out and keep an eye on KDE's native Windows development, since further development of KDE on Cygwin has stopped.
Also see: http://www.microsoft.com/technet/interopmigration/unix/sfu/portappc.mspx
KDE on Cygwin http://kde-cygwin.sourceforge.net/ http://mail.kde.org/pipermail/kde-cygwin/2005-September/003009.html
Development of native KDE on Windows http://sourceforge.net/forum/forum.php?forum_id=507276
by jcannon on October 02, 2006 06:06pm
Last week, I was fortunate to take a day & visit the floor of Interop New York. The Interop conference celebrated its 20th anniversary this year, underscoring the persistence & complexity of interoperability as an industry issue. This year’s mission was no different…to discuss achieving the ideal state of all technology talking to itself, and to others.
OK, so here’s the clever metaphor I came up with on my walk over to the Javits Center (I live in NYC – anyone around….let me know). Interoperability is a loaded term since it can mean so many different things to different folks …but why? Because interop is really about many means toward an end. In fact, I would suggest the goal of most interoperability efforts is to enhance the performance of a total system through improved communication and accessibility of the various subsystems – be they protocols, applications, schemas or operating systems (the means). It’s kind of like…..a city! The below pic was snapped on my way over to the show....the total system of a city only works when its subcomponents work together successfully – be they water, electric, gas, subway, building specs & zoning, urban development & layout, garbage, sewer, security, transit. All of those systems must talk to themselves and to each other to provide a quality of service to the citizens they serve. I skillfully Photoshop’d the below to illustrate my point….if you live in or have visited NYC, you know that it’s amazing that the system works so well, every single day. We grumble that interop in IT is hard to achieve with legacy systems 5, 10, 20 years old. Imagine connecting systems dating back to Gangs of New York. Absolutely nuts.
Pretty Complex Stuff (PCS)
The Show
More than anything, it was fascinating to hear how customers & vendors are defining interoperability. The pervasive definition was squarely on technology and the increasing role of (here comes the buzz-train) Web 2.0 in connecting data, information and systems. This was reflected in the keynotes, such as Andrew McAfee and SocialText CEO Ross Mayfield’s presentations. I’ll start by breaking that notion down into a couple more discrete ideas:
The Browser as Epoxy
This was the most interesting theme of all ~ across both IT and Developer tracks, interoperability was a problem best addressed by the browser. I phrase it this way because at its most basic level, the browser, through robust XML frameworks and rich presentation layers, is what is bringing disparate data and systems together in a seamless way for enterprise end-users. That quality of adhesiveness is being enabled by technology like AJAX and DHTML – not surprisingly - and subsequently one arrives at ‘Rich Internet Applications.’ What was interesting to me was that interoperability solutions weren’t being implemented over wires to connect servers. Instead, silo’d data was being fed to powerful client machines for cohesion. In essence, the browser was the epoxy for the data & systems throughout the network. Cool stuff indeed.
The (Dubious) Optimism of Enterprise User-Collaboration
The keynote on Thursday was dedicated to examining how social software conventions (Wikis, Tagging, RSS) can & should be extended into the enterprise environment. In contrast to the process orientation of most enterprise-level applications, Andrew McAfee, Harvard Business School, pointed out that there was much to gain through large-scale, unstructured data sharing in an enterprise & the subsequent emergent analysis that could be leveraged as a learning system. What makes all this possible? The very cool technologies of Web 2.0: Deep Linking, Search, Tagging, RSS, Easy Authoring.
Why dubious? Only the largest sites on the Internet are proof points of success: Digg, Delicious, Technorati….we have no idea how to bridge legacy systems to a new generation of tools in an enterprise, nor the required tipping point at which critical mass is reached and emergent patterns actually become meaningful. (How useful is this really for a small/medium/large organization?) And control. Users tend to be untrusted, unsupervised, distributed, silo’d and working in environments under great rates of change – how does a manager leverage a system like this?
Can I Get a Customer?
There were a ton of great demonstrations and eye candy, but I was hard pressed to find a customer talking about a successful interop implementation using most or all of the above technologies. This indicated – to me - we’re still in the early hype cycle of these technologies and lack a customer base and the watered-down expectations that accompany real-world implementation.
Adult Visual Aides; Server Racks & The Show Entrance
It was absolutely worth the day – although I would have liked to have seen more customers on-site. Can’t blame anybody given it was the last day of the conference though. Did you attend the show, or have any questions about it? Share your thoughts; I would love to hear other perspectives on the themes that were presented.
by hjanssen on October 29, 2008 12:38pm
Wednesday - Day two of my IPC in Mainz conference, which is a developer-oriented PHP conference.
Very well attended. The most negative thing I can say about this conference is that for some unknown (but brilliant beyond my level of comprehension) reason the venue is a 30 minute cab drive from the Speaker hotel. And the shuttle provided in the morning leaves every 30 minutes, has 5 seats and has a line of 20+ people for it in the morning.
And yes, there are hotels closer by.
It is pretty cool to see how things changed in the last few years; people do not stop/point and stare anymore when they see Microsoft people walking around and actively engaging. People are happy to see us.
There are a lot of sessions on a variety of topics, but I get the most out of talking to people outside of these sessions. I am starting to lose track of everybody I have talked to.
Anyhoot, I had a good conversation with Brian Aker. For those who do not know Brian, he is one of the people behind Drizzle. He gave a keynote yesterday that was extremely well attended and talked about the state of Drizzle, which is starting to become a really interesting database.
One session that was pretty unique was Pierre Joye and Garrett Serack doing a joint session on how to build PHP on Windows. This used to require the sacrifice of your favorite item, standing on your head, facing the North and chanting to RA to get it to build.
Over the past few months, the work we have done with the PHP community to make Windows the best possible platform for PHP has greatly improved and accelerated. All the old libraries have been updated to their latest versions, something that had not been done in over 10 years for some of them. More importantly, these libraries are now the same versions (and thus have the same behavior) as their Linux counterparts.
Additionally, the build system used to be VC6, which means Visual Studio from 1998! The build system is now VC9, or Visual Studio 2008. And, depending on the speed of your machine, it builds in a few minutes. And voilà: a brand new, shiny, hot-from-the-oven, newly minted PHP.
Now we have a great place to start from: a build for Windows that we have all the code for, made with a compiler that comes out of this century. That leaves us ready for the next step, optimizing PHP on Windows, which is what we will be working on for the foreseeable future. If you can and want to, please participate.
These changes are incorporated into the latest build starting with PHP 5.3. You can download this here.
If you want to check out in person what we did, and how you now can build PHP for Windows, check out this link.
BTW, Pierre and Garrett both have the misfortune of reporting to me at the OSTC. And yes, there are questions about their sanity.
A few more days, and then back home. Where my wife, kid and dog claim they are looking forward to having me back again after 2 weeks. Off to get some rest.......
by Peter Galli on October 24, 2008 05:03pm
Getting external feedback is always a good thing, and something that Microsoft has been trying to do a lot more of, particularly with regard to its focus on openness and interoperability with various open source communities.
So it was with great interest that I read the trip report for the CIFS Conference and AD Plugfest, by Andrew Bartlett of the Samba Team, which is a candid assessment of the great challenges, successes and frustrations that are part and parcel of such complex arrangements.
But all that has been achieved so far is in large measure due to the passion, commitment, dedication and tenacity of the Samba team, and their openness to working towards a shared goal. This is truly a great example of how Microsoft and those in the open source community can work productively together.
While there has been some skepticism that Microsoft's focus on interoperability is simply for compliance's sake, the reality is that the work the company has been doing with Samba and others provides concrete examples of our commitment to openness and to working with open source communities.
So, the background to this is that, in late 2007, the Protocol Freedom Information Foundation, a non-profit organization created by the Software Freedom Law Center, reached an agreement with Microsoft that allows Samba to create, use and distribute implementations of all the protocols that allow workgroup servers to connect with Windows.
Microsoft has made protocol documentation for the Windows protocol programs available online and for download from MSDN, and developers are not required to execute a license or pay a royalty or other fee to have full access to this documentation.
As actions really do speak louder than words, it was gratifying to hear from Bartlett that Samba feels it has a "beachhead at Redmond, and a department committed to providing the Free Software community with answers or clarification on any reasonable interoperability question."
by admin on May 31, 2006 12:28pm
But Then Face to Face
I’ve spent many hours over the past few days combing through the comments on Port 25, and the comments about Port 25 on other sites (blogs, industry news, etc.). I was struck by the mix of hope and suspicion. When you don’t know who you’re dealing with, suspicion is a natural result, compounded in this case by years of mistrust of Microsoft’s motives. I realized while combing through the posts and discussions that I never took the time to introduce myself.
I have been a science geek for as long as I can remember – not great for one’s social life but at least I can say I’ve been kicked out of Chemistry class for discussing quantum physics in the back row…
I’ve worked in Silicon Valley for 12 years, and before that studied and worked in San Diego. My degree is in Cognitive Science from UCSD (a combination of neurobiology, artificial intelligence, and cognitive psychology, founded by the great Don Norman). In the Valley I worked as a software engineer for several years, building desktop, distributed, and web applications in C++ and Java, including DCOM and J2EE. I worked at a couple of normal software companies, and worked crazy hours at 5 startups.
My best times in software engineering were at Ofoto (now Kodak) where I ran the web development and middleware engineering teams. We built a highly scalable photo service using Tomcat and Jakarta on Solaris and Linux, built our own Java-based persistence layer, and used XML for internal and external integration – old hat now, but this was in 2000.
After Ofoto, I went to BEA Systems as Web Services Principal Architect. I got the chance to work with the WebLogic Workshop team (aka Crossgain, the high-profile defection of Tod Nielsen and Adam Bosworth from Microsoft) and customers like Merrill Lynch on “web services architectures” – no one called it SOA back then. Workshop became open-sourced as Apache Beehive under the leadership of Carl Sjogreen (now a product manager at Google). In late 2002, I joined the WebLogic Integration team to do technical market development and product strategy. I learned a lot about software strategy and got hooked on the web services management concept, convinced the product management team to build Quicksilver, which eventually became the AquaLogic Service Bus.
In 2004 we started seeing real impact on the core business from JBoss, and my GM Chet Kapoor wanted BEA to get into the open source software game. Top management didn’t like the idea, and shortly after Tod Nielsen left the company, a few dozen directors and VPs followed his lead, as did Chet. Shortly after I’d joined Microsoft, Chet became the CEO of Gluecode and asked me to join. I couldn’t see how their business model could be defended, and stayed at Microsoft. Six months later, Chet sold the company to IBM and became VP of IBM’s Open Source group. I admit that I kicked myself.
I joined Microsoft originally to work directly with startups, in the hope that I could have a positive impact on people pouring their hearts and minds into risky technology bets. After batting 1 for 5 I know how tough it can be. Microsoft generates nearly all its revenue from partners (96%) but gets hammered on lack of innovation, so this seemed like a good fit. In Dan’l Lewin’s group in Silicon Valley (Mountain View) I got to meet and help a number of different startups. During this time, I saw advantages of open source economics for some of these companies, especially in SaaS. It was clear to me that something had to change in our licensing and pricing – two very challenging things to shift. I spent several months advocating within the company for change, with good results.
When Bill Hilf offered me the chance to join the Open Source Software Lab, I jumped at the chance. Open source is a pivot for the software industry at large as well as for Microsoft. I’m very curious to understand the breadth and depth of technologies available in open source, and deeply committed to driving interoperability between open source and Microsoft technologies.
This is longer than I’d intended, so I’ll stop here. For those who want more background, my blog is at http://samus.typepad.com, and I have to point out an interaction I had with Matt Asay, a smart and outspoken leader in the open source community.
PS: In my first post (“Why is it called Computer Science”) one reader pointed out the similarity of the topic to Paul Graham’s brilliant essay “Hackers and Painters”. While the point was made with a degree of suspicion, I’m grateful to the poster for leading me to Paul’s essay. Thanks, cblazek.
by hjanssen on January 25, 2007 11:50am
Whether deploying Windows in a Linux or UNIX environment, or vice versa, identity management can be a challenge. In order to explore this topic, Hank spent some time talking with Dustin Puryear: author, consultant, featured speaker at TechX World and owner of Puryear Information Technology, LLC. Dustin specializes in solving integration challenges related to mixed environments and has penned a few books on the topic. Two of his books, "Best Practices for Managing Linux and UNIX Servers" and "Spam Fighting and Email Security for the 21st Century", are available as free e-books on his site.
In this podcast (Part 1 of 2) Hank and Dustin explore issues related to implementing identity management solutions, focusing primarily on the importance of policies. While deployment itself is obviously important, without proper policies even the best solution can fall short. In the next podcast the conversation will focus on implementation strategies. If you have particular questions you'd like answered in the next podcast, please feel free to comment below.
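As a toy illustration of why policy matters before any tooling is deployed (this is a hypothetical sketch, not something from the podcast): when merging accounts from a Linux host and a Windows directory export, a naming policy has to decide what happens when the same login maps to two different people. A few lines of Python can flag such collisions; all names and data below are made up for illustration.

```python
# Toy policy check: before merging account databases from a Linux host and a
# Windows domain export, flag logins that appear in both sources but are
# mapped to different owners. All account data here is hypothetical.

def find_conflicts(linux_accounts, windows_accounts):
    """Return logins present in both sources but owned by different people."""
    conflicts = {}
    for login, owner in linux_accounts.items():
        if login in windows_accounts and windows_accounts[login] != owner:
            conflicts[login] = (owner, windows_accounts[login])
    return conflicts

linux_accounts = {"jdoe": "John Doe", "asmith": "Anna Smith"}
windows_accounts = {"jdoe": "Jane Dorsey", "asmith": "Anna Smith"}

print(find_conflicts(linux_accounts, windows_accounts))
# → {'jdoe': ('John Doe', 'Jane Dorsey')}
```

Whether a collision like `jdoe` is resolved by renaming, by suffixing a domain, or by treating one source as authoritative is exactly the kind of policy decision that has to be made before any synchronization tool is switched on.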
by admin on April 14, 2006 03:03pm
Congratulations to Marc Fleury and team for their success this week in selling their company to a like-minded partner that pioneered Open Source business models.
Running a startup is hard. Keeping it going and focused is harder. Selling it while maintaining your principles is nearly impossible – but JBoss has scored a hat trick. From 1997 to 2001, I was in a string of startups in Silicon Valley, and I know firsthand just how hard it is to make them work.
When a company reaches the level of publicity and success that JBoss has achieved, there are pressures to compromise in many dimensions – exemplified by the rumored half-billion dollars that “a large database company” offered them only last week. Marc could have taken that and walked out of the industry altogether – enjoying a comfortable life on his own private island. Instead, he has chosen to marry his company to a long-term resident of the Commercial Open Source Software industry that will sustain the JBoss principle of interoperability and heterogeneous support, and is well-aligned to evolve the business model in a pure Open Source tradition.
My lab is working with JBoss on the technical collaboration project that Microsoft announced earlier this year – good, interesting work – and will be exploring new areas to work on interoperability, such as JBossWeb, an Apache-based web server with native support for .NET, Java, and PHP.
We’re continuing to read and sort the suggestions we’re getting – including emails – and looking for clusters of requests that we can turn into projects. What would you like to see us do from a Java interoperability standpoint? What would be most valuable to you?
by jcannon on September 14, 2006 02:09pm
Over the past weekend, we discovered that some of the comments being posted to our blogs were being caught by the Community Server spam filter. Usually, this wouldn't be a bad thing - especially if you were around when we launched Port 25. However, the algorithm for catching spam had been unknowingly set to the strictest interpretation due to a recent server upgrade...so many benign comments had been caught over the past couple weeks.
This has been corrected and all comments have been set to publish. If this happened to you, we apologize. Comment away! If you ever feel like something on the site isn't working the way it's supposed to, please let us know.
To make up for our oversight, the video below should provide a good laugh for anyone who works in IT - regardless of what operating systems or development models you subscribe to :)
Thank you Long Zheng (I Started Something) for blogging about 4.
by MichaelF on September 05, 2006 06:01pm
On a recent visit to Thailand I had the opportunity to meet a variety of customers, open source community members, government officials and new Microsoft people. A great part of my job is the opportunity to understand the state of the software industry in different countries around the world – both developed and emerging. It’s fascinating to see the patterns of similarity, and often surprising to learn about the myriad country-specific characteristics that influence the evolution and growth of a software industry. My visit to Thailand was remarkable in both of these respects. I visited at a time when the government was in a somewhat unstable state – although the environment was remarkably calm and under control. The Thai software industry is relatively segmented, with some areas quite advanced and others still in early development.
The role of Microsoft in a country like Thailand is somewhat different than in many large developed countries. In countries like Thailand, Microsoft participates heavily in the growth and health of the software industry. Certainly we do this in large countries as well, but it’s much more direct and hands-on in countries like Thailand. Naysayers will claim this is so we can just ‘sell more’ to new audiences. Of course we care about software sales (we’re a commercial software business!) but in these environments we prioritize the condition of the software ecosystem, as it’s the basis for any near or future business. For example, the Microsoft general manager for a country like Thailand (Andrew in the photo below – far right) will spend a good portion of their time working on country-wide initiatives for improved software education, or in cross-vendor forums focusing on improved software security, or in helping the government plan software infrastructure for future natural disasters (tsunamis, for instance). What I find interesting is that many people, particularly in the U.S., don’t often see this side of Microsoft, yet it is a very important part of our role as a business, community and industry leader to help the entire software ecosystem grow and prosper.
Related to this, one of my trip highlights was a dinner in Bangkok with some of the leading science and technology thinkers in the Thai government around the future of IT in Thailand*. Below is a photo of (left to right): Dr. Chadamas Tuwasetakul, Assistant to Director, National Electronic and Computer Technology Center (NECTEC); Dr. Pairash Thajchayapong, Senior Advisor to National Science and Technology Development Agency; me; Dr. Thaweesak Koanantakool, Director, NECTEC; Andrew McBean, General Manager, Microsoft Thailand.
We had a great discussion about software for children in K-12 classrooms, and about the benefits and challenges of delivering country-wide computing infrastructure in environments with numerous IT challenges (such as very few technical support staff). We also talked about commercial and free software, Microsoft’s position on OSS, standards, interoperability and our future product line – particularly Windows Vista and Office 2007. It was a lively and opinionated discussion, and I learned much from Dr. Tuwasetakul, Dr. Thajchayapong, and Dr. Koanantakool; their insight was highly valuable.
The software industry is going through tremendous growth in Thailand. I feel there is much to be learned from watching these next-frontier software ecosystems: how they develop their industries in this new era of software economies, how they learn from other economies, countries and trends, and, most importantly, how they create a software legacy that is both prosperous and uniquely Thai.
O’Reilly blogger Allison coined an interesting term, technodiversity, that I think is a great way of thinking about ecosystem evolution. I believe strongly that intellectual invention, innovation, and both pragmatic and expressible interoperability are keys to achieving this type of technodiversity. In an upcoming blog entry I hope to dive deeper into this area of pragmatic and expressible interoperability, to describe why this often fuzzy term ‘interoperability’ is crucial to growing ecosystems. Warning – expect unusual correlations to trains, newts and other seemingly random but (to be illustrated) relevant examples on the subject.
* To be fair, earlier in the day I spent time with James Clark, long-time OSS/XML developer and now part of SIPA (Software Industry Promotion Agency) and the leading OSS promoter in Thailand, and I would include James as one of the leading thinkers on software in Thailand as well.