by Sam Ramji on January 30, 2007 12:30pm
Back in November 2006, Microsoft and Novell committed to a long-term technical collaboration between the two companies. The agreement covers several areas - Virtualization, Office OpenXML/ODF interoperability, WS-Management interoperability, and directory federation.
With my colleagues at Novell, I am opening a Joint Interoperability Lab. This lab will be around for the long term, and will focus on interoperable virtualization between Windows and SUSE Linux Enterprise Server (SLES). The lab will be part of the product engineering teams at both companies.
In order to get the best candidates for this lab, I'm posting the job descriptions here and inviting the Port 25 community to contact me directly if you're interested in one of the positions. With Novell's permission, I am also posting the Novell job descriptions for their openings in the lab.
There are two position types: Program Manager (PM) and Software Design Engineer in Test (SDE/T).
To inquire about the Novell opportunities, please contact Brad Cutler, Director of Engineering at Novell (mailto:email@example.com)
To inquire about the Microsoft opportunities, please contact me (Sam Ramji, Director of Platform Technology Strategy) at mailto:firstname.lastname@example.org.
Microsoft: Software Design Engineer in Test, Linux Interoperability
Do you want to be part of a group that is changing the future of the operating system platform at Microsoft? Due to recent developments in the Server and Tools Business division at Microsoft, we are looking for an experienced Software Development Engineer in Test who can take on the challenging role of qualifying Microsoft’s new Longhorn Server Hypervisor-based virtual machine solution in a collaborative project with Novell. This position requires candidates with substantial knowledge of Microsoft’s device driver models; strong experience in developing and testing software written in C, C++ or C#; working knowledge of Linux (preferably SLES); and knowledge of Microsoft’s server-class features and applications.
We are looking for an individual contributor with broad technical experience and a passion for developing skills in new areas. Strong planning and test design are key attributes of a successful candidate. It is essential that candidates have a proven track record of working independently and excellent communication skills. A BS (or equivalent) degree in Computer Science or Electrical Engineering is required. We are looking for individuals who possess a strong drive for results and a passion for understanding and meeting the needs of our customers. Familiarity with Microsoft’s testing standards, processes, and tools would be a benefit. If you want to challenge your technical expertise then this is the right team for you!
Novell: Software Design Engineer in Test, Windows Interoperability
We are looking for an experienced Software Development Engineer in Test who can take on the challenging role of qualifying a SLES 10-based virtual machine solution in a collaborative project with Microsoft. This position requires candidates with substantial knowledge of Linux device driver models; strong experience in developing and testing software written in C, C++ and various scripting languages; working knowledge of the Microsoft server environment; and knowledge of server-class features and applications on Linux.
We are looking for an individual contributor with broad technical experience and a passion for developing skills in new areas. Strong planning and test design are key attributes of a successful candidate. It is essential that candidates have a proven track record of working independently and excellent communication skills. A BS (or equivalent) degree in Computer Science or Electrical Engineering is required. We are looking for individuals who possess a strong drive for results and a passion for understanding and meeting the needs of our customers. Familiarity with Linux testing standards, processes, and tools would be a benefit. If you want to challenge your technical expertise then this is the right team for you!
Microsoft: Program Manager, Linux Interoperability
Be a part of the Platform Technology Strategy team, drive change and make a difference.
This highly visible senior program management position will have the opportunity to work in one of the core areas of growth for Microsoft. The Platform Technology Strategy group is the engine for technical analysis, including the Linux/Open Source Software labs. Our goal is to provide deep and relevant technical analysis and to deliver strategic guidance and messaging from this research.
The main focus of this position is to drive interoperability between Linux and Windows, including planning and leading the Microsoft/Novell Joint Interoperability Lab. This is a multi-million dollar, multi-year effort that will ensure high performance and availability of both SUSE Linux on Viridian and Longhorn Server on Xen.
Specific responsibilities include:
We are looking for a highly motivated individual with strong business acumen and a strong background in technology. Candidates need strong leadership abilities with proven experience leading technical teams and delivering significant and provable results. Candidates must also have demonstrated excellent problem-resolution and decision-making skills and be able to deliver results on multiple projects in a complex, fast-moving environment.
This is a high visibility role that involves strategic and technical communication at all levels. You should have a proven track record in analyzing technologies, and building programs to respond accordingly. In addition, you will have proven excellence in cross group collaborative projects, including the very important ability to drive non-reporting groups to perform and deliver. You should also have a track record of starting, building, and finishing large projects. The ideal candidate will have 5+ years of related industry experience in software development and one or more of the following: strategic consulting, program management, IT management, partner engineering management, and/or server industry marketing.
A technical background is very important to succeed in this role. 25% travel will be required. Some international travel may be involved. A BS/BA degree is required; an MS degree is preferred.
by MichaelF on January 29, 2007 04:15pm
In December, Jamie posted a call for questions in the spirit of Festivus, one of our favorite secular non-mainstream holidays (aye, we be talkin like pirates on September 19th too, matey). Here is the result, or at least the first part of it.
We didn't get to all of the questions in this first pass, but we will be posting the continuation of this conversation early next week. Let us know what you think; if you enjoyed this, we'll be happy to do it more regularly.
by Bryan Kirschner on January 26, 2007 12:45pm
I started this chain of blogs about the law-and-open-source analogy based on something Matt Asay had written that struck me as interesting—but didn’t sit with me as quite right. So it seems appropriate to tie up this set of blogs with something he wrote that seems to me to be entirely right, and helped frame a comparison about law and open source that makes a lot of sense to me.
I previously blogged about the fact that legal knowledge actually seemed more “open” than “closed.” I also kicked around the idea that the supply of lawyers was especially “artificially” restricted. Without rehashing those discussions, I didn’t find those to be compelling as singular ways the legal domain was different from the software development domain.
Then I read a recent blog from Matt. Blogging about companies for whom it is an “open question” if they “are abandoning some of the community benefits of open source by having some of its technology proprietary,” Matt wrote that such companies are “trying to balance being a good community citizen with getting paid. It's not an easy problem to resolve” (my emphasis).
I read this statement as characterizing what a lot of companies and individuals—open source or not, in the software business or not—are trying to do. Specifically thinking about the legal analogy, here’s what struck me.
Among friends, family, and colleagues I happen to know a lot of lawyers (probably a result of both my wife and brother being lawyers rather than a character flaw on my part (lawyer joke)…). Thinking of this balancing act Matt describes, I realized that:
One lawyer I know, a litigator at a firm, recently made a change to a corporate (non-legal) job, in which he’s making about the same as he had been practicing law. In both cases, a pretty respectable overall salary.
Another lawyer makes literally half what he did as a litigator, working in a public (government) agency.
And one more lawyer I know makes literally a third of what he did, working for a non-profit doing public interest law.
And of course, lawyers always have the option of providing their services at no charge (pro bono)—the American Bar Association recommends 50 hours per lawyer per year.
(Qualitatively I would have to say intrinsic job satisfaction among the lawyers I know is inversely related to total compensation against these four cases. The assumption here is that, for most people, being a “good community citizen” is in itself at least somewhat rewarding.)
Additionally, individuals have broad latitude in the US to represent themselves (pro se), generating a “do it yourself” supply of legal services.
For the sake of plotting everything on two axes, you could plot the total units of legal service delivered (y-axis) against the payment received by the “deliverer” for that unit (x-axis)—in this view paying someone else would be a negative payment while DIY would be a 0 payment. (I don’t know exactly the shape of the curve here…but it actually doesn’t matter much for the sake of the thought experiment.)
Now let’s think about open source code with a similar plot in mind. Lakhani and Wolf’s open source developer study found 40% of developers were being paid while developing code. So on the chart above they fall somewhere in quadrant 1.
Cutting the data a different way using cluster analysis, the authors show that 80% of the sample is essentially covered by work need/payment (cluster 1) or two clusters where “intellectually stimulating” and “improves skill” lead by a long shot as motivations among predominantly unpaid developers (see page 13 of the linked document). This is code in quadrant 2 of the chart. “Obligation from use”—that is, giving back, akin to a commitment to pro bono work—is the far-and-away leading motivator in the fourth cluster.
If we plotted all the units of code in the world—open and not-open alike—would the shape and motivational dependencies look much different from the shape of all the units of legal service?
In quadrant 2 (no payment received by deliverer) in either domain would be a lot of things that “scratch your own itch” or are themselves stimulating and rewarding. In quadrant 1 would be a whole lot of things for which these characteristics apply less or not at all (or perhaps inversely—they are painful or tedious or otherwise unrewarding…).
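For concreteness, the quadrant bookkeeping above can be sketched in a few lines of Python. This follows the post's loose convention (quadrant 1 is paid delivery, quadrant 2 covers zero payment like pro bono or DIY, and negative payment when you pay someone else); the labels and dollar figures are made up for illustration, not data from the Lakhani and Wolf study.

```python
# Illustrative only: classify units of service (legal or code) by the
# payment the deliverer received, per the post's quadrant convention.

def quadrant(payment):
    """Quadrant 1: deliverer is paid; quadrant 2: zero or negative payment."""
    return 1 if payment > 0 else 2

# Hypothetical units of legal service, payment received per unit ($):
units = [
    ("litigator billing a client", 400),    # paid professional work
    ("public-interest lawyer", 130),        # paid, but much less
    ("pro bono hours", 0),                  # ABA-recommended giving back
    ("representing yourself (pro se)", 0),  # DIY supply
    ("hiring outside counsel", -400),       # you pay: negative for you
]

for label, payment in units:
    print(f"{label}: quadrant {quadrant(payment)}")
```

The same classification applies unchanged to units of code: paid development lands in quadrant 1, while itch-scratching and giving back land in quadrant 2.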
This brings me all the way back to the question of law, software, and “openness.”
I took issue with the idea that the primary “problem” with the legal domain was that it wasn’t “open” enough. I think the process of thinking through comparisons between law and software illuminates the fact that there is no single magic bullet for increasing the supply of something—whether code or legal services—at lower cost. (Note I’m taking a macroeconomic and long-term view here: although I work for a company that sells software, in the grand scheme of things more-for-less is probably good for commercial interests and consumer interests alike; neither I, nor, I hazard a guess, whoever defined that product strategy for Microsoft, feels “bad” that tens of millions of end-user developers can DIY hundreds of millions of lines of code using the Excel macro engine and Office object model. That product substantially “shifted the curve” and was a win-win overall.)
Making more knowledge available—through exposure of raw artifacts like source code or judicial decisions, or through pedagogical materials—can shift the curve. So can increasing the supply of people who are trained to be developers or lawyers. So can decreasing the difficulty of delivering a unit of service or code (XNA is really interesting to me in this respect…). So can making delivering some unit of service or code more intrinsically rewarding (fun, tear-jerking, intellectually stimulating, guilt-assuaging…). So can making it more financially rewarding to deliver some unit of service or code.
Balancing being a good community citizen with being paid is a hard problem to solve because there is no single formula for striking the right balance. But I find that to be a good thing: it translates, to be sure, into many opportunities to try to strike the right balance and fail—but also many opportunities for diverse individuals, communities, and corporations to co-exist, adapt, prosper—and surprise. A world where one size does not fit all may be messy, but it is a lot more interesting.
by hjanssen on January 25, 2007 11:50am
Whether deploying Windows in a Linux or UNIX environment, or vice versa, identity management can be a challenge. To explore this topic, Hank spent some time talking with Dustin Puryear: author, consultant, featured speaker at TechX World, and owner of Puryear Information Technology, LLC. Dustin specializes in solving integration challenges related to mixed environments and has penned a few books on the topic. Two of his books, "Best Practices for Managing Linux and UNIX Servers" and "Spam Fighting and Email Security for the 21st Century", are available as free e-books on his site.
In this podcast (Part 1 of 2), Hank and Dustin explore issues related to implementing identity management solutions, focusing primarily on the importance of policies. While deployment itself is clearly important, without proper policies even the best solution can fall short. In the next podcast the conversation will focus on implementation strategies. If you have particular questions you'd like answered in the next podcast, please feel free to comment below.
by Sam Ramji on January 24, 2007 03:00am
Today I got an email from Adam Sheppard, who leads development of Photosynth. If you haven’t seen Photosynth yet… it may be because you use Firefox and not IE. That is no longer an obstacle for those curious about the next generation of photo-management technology.
It’s a visually stunning browser-based application which lets you explore a collection of photos by navigating through an inferred 3-dimensional model of the space the photos were taken in. For example – take 500 photos of St. Mark’s Square in Venice, turn them over to Photosynth’s feature extraction and 3-D processing engine, and you get this:
The first tech preview was built for IE only. Today the team has launched the Firefox version of Photosynth. As a Firefox user myself, I’m glad to be able to show this great application off at home. It’s a 5.5 MB install, and you’ll need to grant permission to labs.live.com as a plug-in source (at least temporarily).
Hats off to the Photosynth team for shipping a great application on multiple browsers! And Adam – let us know when we can put our own collections into Photosynth ;)
by jcannon on January 23, 2007 02:47pm
This February 14-15, 2007, Port 25 will be a sponsor at the LinuxWorld OpenSolutions Summit - one of the largest Linux and open source gatherings on the East Coast. A number of folks from the lab will be attending - myself (Jamie), Michael Francisco, Bryan Kirschner and Anandeep Pannu, among others.
We're excited to be at OpenSolutions Summit - we're certainly available to meet up, discuss & engage with you if you're attending. Please feel free to shoot us a quick mail if you're going to be there. Also worth mentioning, if you're attending, we will be sponsoring a Valentine's Day Evening Reception the night of February 14th. No marketing, no sales - just a great chance to relax, talk with friends and colleagues and enjoy some free food and drink! Plus, some pretty amazing views of Manhattan. If you would like to attend, it's free, but you do have to register for the reception (and be a paid attendee of LWOSS).
As an offer to our community, if you would like to attend the event, feel free to use our discount code (N0126) when registering for 20% off the advertised price tag. The LinuxWorld folks also tell us you'll get some pretty decent seating at the keynote presentations, some registration benefits and a special welcome gift. Good deal!
Stay tuned for more updates as the event approaches & coverage throughout. Looking forward to seeing you in New York City!
by billhilf on January 23, 2007 12:33pm
I can’t recall if I have ever blogged about this, but we have certainly shown interviews here on Port25 with Jeffrey Snover, architect of PowerShell. PowerShell is a command-line shell and scripting language for Windows: think consistent syntax and standard utilities that make managing and administering Windows much easier. PowerShell is not yet in the market (of course, there are release candidates you can get here). But the community is growing.
Here’s some stuff I’ve found:
It’s great to see the community growth around PowerShell, and I can’t wait to see this community grow even more after PowerShell is generally available (and I love that a lot of this is happening on CodePlex). And remember, PowerShell runs on Windows XP, Windows Vista, Windows Server 2003 and Windows Server “Longhorn”.
Cheers to the re-birth of the command line!
by anandeep on January 18, 2007 02:15pm
Prof. Stephen R. (Steve) Schach is an Associate Professor of Computer Science and Computer Engineering at Vanderbilt University.
Port 25 met up with him while he was visiting the Seattle area, in picturesque Kirkland, Washington, on the shores of Lake Washington.
Steve (he hates being called Prof. Schach!) believes in gathering data to make predictions. While he accepts that data may be open to interpretation, he thinks gathering the correct data is paramount.
He credits Open Source Software with kick starting the Empirical Software Engineering movement saying “We could count the number of lines of code in gcc and Linux – we couldn’t do that with Windows 95!”.
In this interview we discussed empirical software engineering/computer science and some of the work Steve has been doing. This includes his work on the proportion of time that code spends in bug-fixing mode and his work on global variables in Linux.
The latter work was found to be controversial by the Open Source community. Steve thought all he was doing was counting the number of global variables in Linux vs. BSD and stating that Linux had far more than is considered wise! This was surprising to Steve, but isn’t much of a surprise to anyone who knows how much passion Open Source can generate!
Steve’s website is here http://www.vuse.vanderbilt.edu/~srs/ and you can find his publications on his website.
by MichaelF on January 17, 2007 09:45am
Following-up on the announcement of the Microsoft/Zend technical collaboration from October, we wanted to make sure the Port 25 community was aware of the first set of deliverables.
The technical preview of Microsoft FastCGI for IIS 6 and IIS 7 can be found here: http://www.iis.net/default.aspx?tabid=1000051
Zend Core 2.0, which includes the Windows version of the Zend Enabler technology can be found here: http://www.zend.com/products/zend_core
Combined, these solutions provide a 200-300% performance improvement for PHP on Windows, offering performance comparable to PHP on Linux.
If you try this out, we'd be interested in hearing about your experience.
by MichaelF on January 15, 2007 02:01pm
Back in August we posted a podcast with Praerit Garg regarding the announcement of the Service Modeling Language draft specification put together by Microsoft and a number of other leading technology companies. We wanted to follow-up and provide an update on the specification and the workgroup.
On September 12th the public feedback workshop was held, and a good deal of feedback was provided both by community members in attendance and by those submitting feedback via email. One of the key topics was the name of the language, as many felt the SML title didn’t fully capture the intent or capabilities of the specification. Pratul Dublish, Senior Program Manager at Microsoft, has a blog entry regarding this discussion here (along with a number of entries regarding SML and the Working Group).
The Working Group has published the second draft of the SML specification and the first draft of the SML Interchange Format specification. It has also announced an Interoperability Workshop for interoperability testing between different implementations of the specifications. The workshop is open to companies and individuals willing to bring an implementation of the latest published specifications to the workshop. The workshop will be held during January 16-17, 2007 in Austin, Texas. The invitation for this event can be found here.
Per the terms provided in the specification, there is nothing that prevents an Open Source project from implementing the SML Spec. Eclipse has started a project called COSMOS which implements the SML runtime, modeling tools for SML, and the infrastructure to enable the use of SML for model based management. Take a look at: http://www.eclipse.org/proposals/cosmos for details.
The URL for the SML Working Group is http://www.serviceml.org; take a look at this site if you are interested in learning more about the specifications, the Working Group’s activities, and the Interoperability Workshop.
We will continue to keep the Port 25 community up to date on the progress of the SML Working Group and the evolution of the specification. If you need help connecting with the group, please let us know.
Please take a minute to provide feedback to the Working Group if this is a topic of interest either personally or professionally.
by Bryan Kirschner on January 11, 2007 11:00am
There have been two journal articles lately that have stuck in my head:
First, Brian Fitzgerald writing broadly about the future of open source (“open source 2.0”) in September’s MIS Quarterly argues for the durability of OSS because it can achieve “a balance between a commercial profit value-for-money proposition while still adhering to acceptable open source community values.” Within this flexibility he describes how “the quintessential proprietary software company, Microsoft, can appear to satisfy the definition of an open source company, while a quintessential open source company, Red Hat, can appear to resemble a proprietary software company.”
That in itself is a lot to chew on, especially if you happen to work in the Open Source Software Lab at Microsoft.
But then Baldwin, Hienerth, and long-time user innovation champion Eric von Hippel published an article in November’s Research Policy including discussion of the relationships between “high-capital, mainstream manufacturers” and “low-capital, experimental user-manufacturers” in relation to innovation. They conclude that in “design spaces”—markets or products; they use cell phones and PDAs as an example—that are “relatively easy to expand,” the high-capital manufacturers and user-innovators might “co-exist indefinitely.” As I read it, one of the key concepts is the idea of “toolkits,” which split the cost of designing an innovation into a capital component (the toolkit) and the designer’s decision—thereby “reducing the time and effort needed to generate new designs” and increasing individual and possibly community capacity for innovation—“rejuvenat[ing] innovation in a design space that was previously deemed to be exhausted.”
(It’s worth pointing out that as far as I have seen, von Hippel has not historically thought of this as applying to software as opposed to physical goods—a source of no end of consternation for me, because I find it to be quite elegant, even inspiring when applied to software and maybe even further to other types of intellectual property—but that’s another blog.)
I subsequently ran across a great example of real-life business and engineering decisions by the Education Products Group at Microsoft that makes a fascinating case study to which to apply these two ideas. The SharePoint Learning Kit (SLK) is (in essence) “a toolkit” for Microsoft’s SharePoint 2007, designed for teachers, that is being released under a Shared Source license.
We’re very pleased to have Mike Hines, a product manager with this group, stop by on his way to BETT 2007 (the Educational Technology Show) to tell us about the engineering, licensing, and business decisions they made, why they made them—and what’s started happening as a result.
In subsequent interviews we will turn the focus to some of the cool technical work being done in community, commercial, and public-sector projects using SLK, such as the project he mentions in Kent.
Details on SCORM certification: http://www.adlnet.gov/scorm/certified/index.cfm?event=main.product&certid=196
SLK FAQ: http://www.codeplex.com/SLK/Wiki/View.aspx?title=SLK%20FAQ
SLK Contact: email@example.com
by billhilf on January 09, 2007 12:00pm
You may have heard of the wild wind storms we had in the greater Puget Sound area in the weeks before Christmas. It was intense. I lost power for eight days and was without cable (and Internet connectivity) for five on top of that. The surrounding area was a disaster in the days after the storm - here are a couple of photos from the main road to my neighborhood:
It was easily the worst storm I’ve been in and we were fortunate not to have any significant loss or damage to our or our neighbors’ and friends’ homes. Many were not as fortunate.
It certainly brought a lot of people close together, to have meals together, to get together at friends’ homes with generators and heat, and to let out our frustrations and to celebrate.
To say many people in the area are now out buying generators is an understatement. The local Home Depots are cleared out. I have a plan myself, but this experience certainly got me thinking about the critical dependency I (we) have on electricity. Absence makes the heart grow fonder.
Certainly disaster can always strike, and preparedness is important, but the most valuable lesson for me in all of this was the importance of family, friends and a community that comes together in times of need.
Happy New Year. And I promise my next blog will have something more related to technology!
by hjanssen on January 08, 2007 07:05pm
Locate soapbox - Place soapbox – Make sure I do not fall off or thru said soapbox – Stand on top of soapbox –
Start with a small cough, take a deep breath and begin…
The people I work with in the OSSL group know one of the easiest ways to push my buttons: the mere mention of Web 2.0 results in tirades from me that usually include comments not fit for print.
Yet, I have decided to write a blog about this latest phenomenon called ‘Web 2.0’ or as I like to say, ‘the web that wasn’t.’
When I started this blog, I was sitting in the Barcelona airport on my way back from a presentation I gave at TechED 2006. The talk was about what we do here at the OSSL. But that really has no relevance to this blog.
While sitting in the lounge at the airport I was reading an article in USA Today that somebody had left on the chair next to me. (I wonder if that makes me cheap?) The article was written by Kevin Maney (I honestly have never met him) and was called ‘Packed Tech Summit With Vats of Yahootinis Ring Bubble Warning Bells’ (the USA Today in question was from November 16, 2006, page 9A).
Not really a title that seems to have anything to do with Web 2.0, but if you can get hold of the article, I highly recommend reading it. It very much describes the Web 2.0 phenomenon, and he draws comparisons with the big telecom/tech bubble of 2000. I will not get into the article here; I will leave that as an exercise for the reader of this blog. But it was the catalyst for me to finally write a blog on what has been bugging me about Web 2.0.
Having worked with OSS since the very first Linux kernels came out, and with Unix at AT&T for quite a few years before that, I have seen a lot of great and sometimes not-so-great changes happen. But one of the things OSS allows you to do is take it and, for the most part, do with it (or to it) what you want. To make it serve your purpose. It is a very evolutionary way of creating software. The strong and useful survive; the weak and useless do not.
For me there is no such thing as Web 2.0. There never was, and there never will be. The whole concept of Web 2.0 is something that flies completely in the face of what I believe OSS has stood for. OSS has always prided itself on its independence, its freedom, on not having a label put on it. Where the web or the OSS movement is today is in large part due to natural evolution. And the great thing about natural evolution is that you never really know where it will end up. It is always changing. So putting a label on something, a label that is basically used by the ‘establishment’, does the concept of OSS a disservice. And when that label seems to imply a version of something, I get really uncomfortable. Anandeep Pannu sent me an interesting link to a cartoon that very nicely sums up a lot of my feelings on this subject; you can check it out here:
The cartoon seems to be done by the same person who does the cartoons in Linux Journal; it has the same characters and style.
And if Web 2.0 truly was something that the OSS community was actively working towards, can somebody then please tell me what Web 1.0 was? Or more importantly, please tell me what Web 3.0 is _before_ it takes place, rather than defining something after it has happened.
When I explain my description of Web 2.0 to people, I describe it as a big bus carrying all kinds of developers (OSS and commercial ones alike). They were all writing really cool stuff, and they never did it with the idea of putting together what is now termed Web 2.0. Then there were the people running alongside the bus, trying to slap stickers on it with ‘Web 2.0’ printed on them. It was something done not by the people on the bus, but by people running outside of it trying to keep up. Actually, the people on the bus were not even aware it was happening.
I have spoken to a lot of OSS developers, at conferences and thru contacts that I have, and often Web 2.0 comes up. And I am struck by how similar their views on it are to mine.
For me one of the really cool things that has happened with the web in the last few years is that a lot of it was written by people (both OSS and commercial developers) who had a passion for solving a problem or realizing a vision of their own. The fruit of this labor was then merged and used by end users in ways the original creators never thought of. Which is the great thing about it! Technology was adapted/used/applied by people around the world to solve or create things they really wanted. A really cool way of developers and end users getting together without there ever having been a plan to do so.
Often software is created for a specific purpose, and a lot of technical innovation goes into it. And frequently we forget about the people who will end up using what we create. What is happening on the web (and I am starting to see it in other software areas as well) is that other developers, or in a lot of cases end users, have put things together that were never meant to be put together.
I wish we could agree on calling it something other than Web 2.0, which for me invokes a clearly defined software release, something it certainly is not. Maybe call it something like ‘The unintended web collaboration framework’. (Can you tell I do not work in Marketing? My slogans would not sell water to a dehydrated person in the desert!)
The web is an evolution; I for sure am not smart enough to know where it is going. But I am looking forward with great interest and enthusiasm to seeing how everything will look a year or more down the road.
Takes a small cough – Steps down from his soapbox – Puts soapbox away –
by jcannon on January 05, 2007 04:21pm
In our final video interview featuring University Hospitals of Cleveland and their new Physician Portal, we have the opportunity to meet Joe Yelanich, Sr. Account Executive from First Consulting Group (FCG). FCG has a long history in the medical IT industry, and chose to deploy their solution on top of JBoss JEMS and Microsoft technologies.
Joe offers insight into the considerations and platform choices made from a consulting perspective, and why FCG chose a mix of JBoss, Java and Microsoft technologies to build their healthcare portal. More information on the technical and healthcare aspects of this solution is available in the previous segments. More information on FCG can be found on their website.
Format: wmv Duration: 6:58
by jcannon on January 04, 2007 04:31pm
Yesterday, we published the first of three video interviews with University Hospitals of Cleveland, discussing their new Physician Portal, a healthcare solution built on top of Windows Server and the JBoss middleware system. Ed Marx, hospital CIO, discussed the IT needs of the hospital, and how they arrived at a solution that built on - and took advantage of - both open source and Microsoft technologies.
Today, we get an opportunity to speak to Dr. Nathan Levitan, Chief Medical Officer & Senior VP of University Hospitals, and Dr. Ed Michelson, Chairman of Emergency Medicine. Each doctor provides insight into how different, disconnected legacy systems materially impacted the quality of care provided to patients, and how the implementation of the portal has been successful in linking patient information among, and within, its network of 150 locations. In addition, management, usability and security are also covered from an end-user's perspective.
Format: wmv Duration: 10:16