by MichaelF on August 22, 2006 03:30pm
We wanted to follow up on this post from June 14th: Shared Source Development Contest and share the results of this contest.
Results can be found here: http://www.windowsfordevices.com/news/NS8278694574.html
Congratulations to Port 25 reader Marcelo Van Kampen and his teammates Lucas Berinotti, Evandro Rezende, and Rafael Teixeira, who took third place with their Street Blog project!
From left to right: Marcelo, Lucas, Evandro (Rafael is behind the camera so you'll have to use your imagination)
Nice work guys!
by admin on May 22, 2006 12:47pm
We had a great week here because we all got to spend a lot of time out of the office meeting people from all over the world who came to attend an event in Seattle. Many of the people I ran into had specific, pointed questions about perplexing technical scenarios they had encountered, or were fielding hard questions from their customers about technology management. Something in these conversations always sticks in my head, and today I learned some very surprising details about the "push" for automation, both process- and technology-based. What I heard in these discussions reinforced the core fundamentals of Technology Management, such as "never replace an expert with what you think is a good application". Let me explain what I mean:
Let's start by discussing the role that automation, be it scripting- or process-based, plays in the life of an IT Operations professional. I uncovered two scenarios: the first where automation was key to driving efficiency, increasing reliability and predictability, and lowering TCO; the second where unnecessary automation was implemented in place of genuine expertise, with an undesirable outcome, of course.

Scenario 1 – Successful implementation of automation: The need for automation is almost always driven by a business need that maps back to a simple, repeatable process. Let's say one of the tasks you're responsible for in a large environment is maintaining and updating DNS records. Take something as simple as changing a DNS record and assigning a new address to the entry. This is a perfectly simple AND repeatable process that screams to be automated. Putting a simple script or a comprehensive tool around such a scenario would be prudent and wise, as it takes the repetition out of the job and makes the overall process less error-prone, more automated, and more dependable.
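As a minimal sketch of what such a script might look like (assuming a BIND-style DNS server that accepts dynamic updates via the standard `nsupdate` tool; the server, zone, and record names here are hypothetical), a helper could generate the `nsupdate` batch for the change so the operator never types the commands by hand:

```python
def make_nsupdate_batch(server, zone, record, new_ip, ttl=3600):
    """Build an nsupdate batch that replaces an A record with a new address.

    Deleting the old entry before adding the replacement keeps the change
    in a single transaction: both steps go to the server together when
    'send' is issued.
    """
    lines = [
        f"server {server}",
        f"zone {zone}",
        f"update delete {record}. A",              # drop any existing A records
        f"update add {record}. {ttl} A {new_ip}",  # add the replacement entry
        "send",
    ]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    # Hypothetical names: write the batch to stdout, ready to pipe into nsupdate.
    print(make_nsupdate_batch("ns1.example.com", "example.com",
                              "www.example.com", "192.0.2.10"))
```

The generated text would then be fed to `nsupdate` (or a comparable tool for your DNS server); generating it programmatically is what removes the repetition and the typo risk from the job.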
Scenario 2 – Unnecessary use of automation: The same process can be automated only to a certain extent, after which human expertise becomes critical. Continuing the process from Scenario 1: once the DNS records are changed, it's very easy to set up an additional script or automation around "validation". By validation I mean double-checking the change to make sure that the prior and successive entries are accurate, leaving no room for error. Why is validation necessary? Well, because once the DNS records are changed and propagated through the environment, an incorrect entry can wreak havoc and make the busiest server unresponsive to any name resolution request. Having a resident analyst who can validate all entries of the request, check the addresses manually before entering them into the script/tool, and do post-change validation helps eliminate the possibility of an "outage", the one word every Operations professional dreads.
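The scripted half of that validation might look something like the sketch below (the resolver is passed in as a parameter so the check can be exercised without touching a live DNS server; the names and addresses are hypothetical):

```python
import socket

def validate_records(expected, resolve=socket.gethostbyname):
    """Compare expected name -> address entries against what DNS actually returns.

    Returns a list of (name, expected_ip, actual_ip) mismatches; an empty
    list means every entry checked out.
    """
    mismatches = []
    for name, expected_ip in expected.items():
        try:
            actual_ip = resolve(name)
        except OSError:
            actual_ip = None  # the name did not resolve at all
        if actual_ip != expected_ip:
            mismatches.append((name, expected_ip, actual_ip))
    return mismatches
```

The judgment calls still belong to a human: a script like this can flag a mismatch, but deciding whether it is propagation delay or a genuinely bad entry is exactly the expertise Scenario 2 argues against automating away.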
In conclusion, I'd like to say that our goal as contributors to Port 25 is always to put forward examples and knowledge that help the IT community run their environments. Therefore, if there is something specific, be it technology or operations methodology, that you'd like us to dig deeper into, your ideas are ALWAYS most welcome.
Thank you all, and have a great week ahead!
by Peter Galli on April 15, 2009 05:54pm
I noticed today that my colleague Jeff Jones in the security group is launching a metric project that appears to be leveraging some of the good bits of open techniques.
I touched base with him briefly and he gave me a little more information about Project Quant, which is being undertaken along with Securosis, an independent security research firm.
Project Quant will be working on the metrics of patch management and is as much an experiment of a new research process as it is one of security metrics, said Securosis founder Rich Mogull in a blog post.
"For this project Jeff wanted to be involved, but also asked for an open, unbiased model that will be useful to community-at-large (in other words, he didn't ask for a sales tool). Rather than us developing something back at the metrics lab, Jeff asked us to lead an open community project with as much involvement from the different corners of the industry as possible," Mogull said.
While he also acknowledged that it is risky for Securosis to allow direct involvement of the sponsor, the company is hoping that the process works the way it thinks it will, which also happens to match Microsoft's project goals.
So, this is what's expected to happen: a project landing site has been set up at Securosis that will contain all material and research as it is developed; every piece of research will be posted for public comment and no comments will be filtered unless they are spam, totally off topic, or personal insults.
All significant contributors will also be acknowledged in the final report, although there will be no financial compensation for contributors and the project itself will retain ownership rights. All material will also be released under a Creative Commons license, with spreadsheets released in both Excel and open formats.
"In short, we are developing all research out in the open, soliciting community involvement at every stage, making all the materials public, acknowledging contributors, and eventually releasing the final results for free and public use. The end goal of the project is to deliver a metrics model for patch management response to help organizations assess their costs, optimize their process, and achieve their business goals. Let us know what you think, even if you think we're just full of it," Mogull said.
For his part, Jones told me that while he has been zealous in past reports about using repeatable methodologies, pointing to his source of public data, and outlining his assumptions step-by-step, he would like to take transparency one step further by developing models and methodologies first, in an open and transparent manner, so that everyone can agree on the pros and cons before the methodologies are applied.
"I think being completely open and transparent will help credibility since, similar to open source, everyone can scrutinize every step of the analysis ... creating open models and potentially getting community involvement just seems to be the right process," he says.
I plan to interview him at greater length in the next few weeks, so look for a follow-up blog then.
by MichaelF on September 28, 2006 06:31pm
In my last blog I started talking about the power of analogy and metaphor, and dove into a discussion of the first analogy of my collection, asking what if the practice of law, rather than being like a domain suffering the consequences of a “failure of openness,” was more like an example of a domain with a great deal of openness. I promised to offer some ideas for analogies that helped make sense of the situation in my next blog.
I was subsequently intrigued by where some of the comments to the post (unexpectedly) led me; thus, I postpone those (original) ideas in order to share some of where those comments led me on the path toward new ones. In this post, I will address the first of two substantive issues: the analogy made by CDarklock between the legal profession and open source software development (in his view, to the disadvantage of OSSD).
Comparing “do-it-yourself lawyering” with “do-it-yourself usability,” he implies a lack of sensitivity among developers to the risks of the latter relative to the legal profession’s diligence with respect to the former.
Sure enough, when pointed to Groklaw by commenter stats for all, front and center is a warning of the risks of the former:
IANAL. I am a journalist with a paralegal background, so if you have a legal problem and want advice, please hire an attorney.
This prompted me to think about some quick phenomenological test to determine if similar evidence for this type of distinction made by journalists commenting on law (Groklaw) might in fact be recognized with equal alacrity by open source developers commenting on usability: in fact, in defense of OSS developers, there are indeed cases of IANAUE (“I Am Not A Usability Engineer”)—although AFAIK it has not made Wikipedia yet, in contrast to variants of the apparently fecund IANAL which has spawned IANYL, TINLA, and IAAL. (WTF?)
At this point you're probably wondering why I was so engaged on this topic—the first reason is that, IMHO, challenges with consistent and effective usability practices are endemic and impactful. (I will never forget my introduction to the usability discipline: I was helping usability engineers build a stochastic model on top of their user observational testing of a web experience which (thankfully) has since been improved. The model was pretty cool as a quantification of how much customer time and effort things like ambiguous terms and redundant links actually wasted—but nothing seared the importance of usability into my brain like watching a test subject (a middle-aged, tech-savvy woman with, as I recall, a PhD—kind of hard to blame the user) actually start to cry in frustration as she tried to complete a task.)
The second reason is that I had never considered the possibility of an analogy between "legal self-representation" and "developer self-usability" as conceptually similar problems to be solved. This analogy offers a different (and interesting) way to think about why it occurs and what to do about it in OSSD, in contrast to traditional corporate development, where both the origins of usability challenges and their resolution seem to me fairly straightforward: does a company (or development group) recognize the value of good practices, resource for it, make it a priority, test against user interaction metrics, etc.?
In fact, there is a paper (Nichols & Twidale, online at First Monday) which provides a comprehensive assessment of usability in OSSD, with suggestions for remediation that come awfully close to echoing what folks in the legal profession would call increasing access to justice (like academic volunteerism and corporate involvement).
It’s an interesting line of thinking both because it is just in time for CSCW 2006 (Computer Supported Cooperative Work—anybody going?) and because folks in the OSS community (just like in the legal community) are doing some “out of the box thinking.” I’ve been trading mails with a PhD candidate at Penn State who has outlined a very thoughtful research agenda on OSS and Usability—I’m pleased to say we’ll be bringing her to Port25 for an interview near the end of October.
With that said, the next point release on our path to part 2 will address an issue raised by ssjdrn, who surfaces (in my words) a tension between two principles I have always taken for granted: efficient signaling and disciplinary control through normalization. It is keeping me up at night (literally—when Spence and Foucault, respectively, aren't jibing for me, I can't sleep. You can ask my wife.)
And yes, it all does come back to an even richer-than-anticipated analogy between law and open source around making successful software.
by MichaelF on March 02, 2007 06:04pm
For those who weren't aware, there is a series of blogs penned by Todd Ogasawara and Matt Asay on O'Reilly's OnLAMP site called Inside Port 25.
Todd and Matt have been writing about content on Port 25 as well as industry news regarding Microsoft and Open Source. If you're interested in reading some other perspectives on some of the topics we cover, check it out.
Here are links to some of the latest blog entries:
Todd's latest:
http://www.oreillynet.com/onlamp/blog/2007/03/interview_with_ian_gilman_micr.html
http://www.oreillynet.com/onlamp/blog/2007/02/sms_notifier_for_pocket_pc_oth.html
Matt's latest:
http://www.oreillynet.com/onlamp/blog/2007/03/microsoft_hiring_its_way_into.html
http://www.oreillynet.com/onlamp/blog/2007/03/microsoft_goes_after_young_blo.html
by Paula Bach on September 10, 2007 04:59pm
Developing software has been an engineering discipline with formal methods. The evolution of software methods has ranged from the now-outdated waterfall method to formal specification languages with precise semantics. Despite having methodologies, software engineering continues to be difficult. Yet despite what seems a lack of software engineering methodology, open source software development can produce stable, useful software. In 2002, Fred Brooks gave a talk at CMU discussing the design of software. You may remember Fred Brooks from such publications as The Mythical Man-Month and No Silver Bullet. In the talk at CMU, Brooks focuses on the social issues surrounding software engineering. Likewise, Microsoft Research (MSR) recently initiated a new group called Human Interactions in Programming (HIP). This group studies the social aspects of software engineering. Their joke is that they "build tools as if software were made by people … working together." This quote, taken from their technical report, outlines some social issues software developers experience in their day-to-day work.

Another way to look at my dissertation project is as an extension of this work on social issues in software development. While the HIP group at MSR studies human interactions among software developers, I extend that and study the human interactions among software teams (or project members): developers, project managers, and usability experts in both proprietary and open source software development environments. As I mentioned in my last blog, I am studying the role of usability expertise in both software environments through surveys, interviews, and observations. I have previously reported on the open source software survey and observations. In this blog, I am reporting on the interviews I conducted internally at Microsoft.
I spoke to eleven employees who are working on various projects at MSFT in various roles, including program managers, developers, and user experience researchers and designers. I spoke to junior and senior staff as well as leads. Program managers are responsible for the feature that their team designs and builds. Developers write code. User experience designers create mockups and give feedback in design meetings, and user experience researchers collect field data and conduct usability studies.

Travel Interlude

I wrote the above section before I left Redmond, and now I am back on campus at Penn State. I had to hurry back and drove straight through from Redmond to Minneapolis. We left Redmond in the afternoon, stopped in Spokane for dinner, and left at dusk. Spokane looks like it is growing and as such has some money injected into its economy. We drove through the night, passing through Idaho and through Montana the next day. We passed through North Dakota late in the afternoon and stopped in Fargo for dinner. After dinner came the biggest rain storm I have ever seen – and I am from Vancouver, Canada, where it rains a lot. I was driving southeast to Minneapolis on I-94 and slowed down to 10 MPH because the rain was pelting down and blowing so hard across the road that it was like a whiteout. I could barely see five feet ahead. We made it to Minneapolis (avoiding the I-35W bridge area) at about 1 AM and checked into our hotel. The next morning we awoke late, had breakfast, and did some grocery shopping at a favorite natural food co-op called the Wedge. I used to live in Minneapolis when I worked at Unisys and Promedicus--a startup that made decision support systems for physicians and died with many other dotcom startups. It was nice to be back in The Cities. After our deserved travel break we went to Madison, WI to visit an old college buddy of my husband's. They used to play in a band together. His buddy Frank still plays.
We stayed there way longer than planned, but it was fun to catch up. We ended up reaching our next destination, South Bend, Indiana (home of Notre Dame), really late, just as the sun started to rise and the birds began chirping. We were so tired the next day that we forgot things in the hotel, including a credit card! We did not notice it was missing until we got to Toledo. Luckily it was the last leg of our trip, and we made it back to State College, PA around 11 PM and slept in the next morning, ready to move back into our townhouse on campus.

End of Travel Interlude

Now that I am back on campus, I have had some time to reflect on the interviews. They uncovered a variety of interesting things. Overall, the eleven people I interviewed were very enthusiastic about the research, and most wanted to see results. Again, I have not analyzed anything formally yet, but all of the people I interviewed mentioned that communicating design changes was very challenging, especially when it comes to usability issues. It seems that the biggest challenges relate to power relationships (not their words) among the team members and the ability of the person with usability expertise and training to gain the trust of decision makers. A prevailing problem is that some people tend to think they are usability experts even when they are not trained, and if they are pushier or otherwise in a position to make the final decision, usability might be compromised. Of course, many other factors weigh into the usability of a product, but overall it seems that the usability experts are being heard one way or another. In comparison to usability in open source, a large proprietary software company has more resources for bringing usability expertise into products, but the social dynamics appear to be as complex as in open source. The only difference may be the character of the dynamics.
In my observations online of open source usability discussions, most of the interactions seemed to be devoid of such social dynamics, except for one group about one issue. So in comparison, open source might not have the same kinds of power relationships because the roles are not as differentiated. As I continue to investigate the characteristics of usability expertise I will see what open source interviews turn up. Stay tuned.
by admin on May 08, 2006 08:51pm
Feedback loops are critical. One of my favorite experiences working with open source software came in 1999, when I was working at eToys.com. We used Apache and mod_perl heavily, and we also did some interesting proxy/caching work with mod_proxy. Since we were one of very few high-volume web sites doing this type of work with Apache and mod_proxy at the time, we found a variety of issues. One of them was a bug (in an unusual code path) that allowed a user's session ID to be cached in the Apache server and then accessed by a different user. Unintentional sharing of sessions on an ecommerce site is typically not a good thing. So we wrote a very simple and short bit of code to fix this and submitted it to the mod_proxy maintainer. It was accepted and we got our first '+1' open source commit – and felt like minor champions that day. However, the developer discussion email list soon started percolating with complaints that this change broke a few other Apache+mod_proxy configurations on platforms other than the ones I had been testing and running on (eToys.com was largely Linux based). So we quickly (within minutes) saw emails on the list, and some direct to me, saying the patch busted some of their Apache configurations on Solaris, HP-UX, or BeOS, or whatever it was I wasn't testing on. I was a little surprised, but also amazed at the immediacy of the feedback. Then I had a sinking feeling – what else might I have broken but not yet received feedback on? Knowing that I was not that good a programmer, worry led to fret and fret led to genuine stress. In discussions with the package maintainer, we agreed to back the patch out. Although I was a little disappointed, it made me realize the power of the community feedback loop, but also the ad hoc nature of how open source software was tested: basically, by other developers on whatever they happened to be running.
In some ways this is good and helps drive the Darwinian nature of the bigger OSS projects. In some ways this is scary because it is really an inconsistent testing model. Andrew Morton recently discussed this issue about Linux kernel testing. So how do you tell when the community feedback loop is significant enough (either community size, developer talent, frequency, methodology, tools, etc.) to represent enough critical mass that your OSS project will be used and vetted broadly enough to represent some degree of satisfactory testing coverage? I have some ideas, but I want to open this one up for discussion: what tactics, strategies or even instincts do you use to assess the quality of an OSS project?*
* Also, I’m quite familiar with the various OSS maturity model projects out there. However, I’m skeptical as most of these that I’ve seen are driven by consultants who use these as tools to charge you a service for helping you to assess a given piece of OSS. This is fine, but it isn’t the question I’m asking (consultants have been doing this for years). However, there is the Carnegie Mellon West led OpenBRR project that looks interesting, but I know many of you have ideas on ways you have done this in the past – so in short I’m interested in your experiences.
by jcannon on July 28, 2006 01:36pm
We'll have more videos and blogs to come on OSCON 2006; in fact, later today we'll be posting an interview with Mindtouch, as well as trip reports from Hank & Anandeep next week. For now, we wanted to share a few pics we snapped from the show floor for those that couldn't make it all the way to Oregon. As you can see from our first picture, the weather stayed beautiful for Day 3 at OSCON :)
Greenplum struck me as an interesting startup with remarkable passion – especially the keynote delivered by Scott Yara, which challenged the open source community to stay dangerous in the face of establishment thinking. I believe O'Reilly is starting to post the presentations, so you may want to check back. His keynote was eclectic and appropriately titled "School of Rock" – a discussion that thematically drew connections between the disruptive nature of open source and the way rock 'n' roll changed music in the '50s, '60s, and '70s.
Some pics from the show floor - and our vantage point...no shortage of interest around O'Reilly.
We also had some time to catch up with colleagues from Microsoft who bravely hosted a BarCamp session on Microsoft and OSS. Tim Heuer, Anand Iyer, Woody Pewitt and Sara Ford all deserve a pat on the back. They also have some great write-ups on their experiences at OSCON as well.
More on Mindtouch later this afternoon.... -jamie
by MichaelF on October 12, 2006 02:30pm
Capturing the sentiment from my colleagues Anandeep, Hank, and Sam – I LOVE THIS JOB !!! Last week I had a chance to meet w/ Mozilla and watch Sam interview Steve Wozniak, and the wonder of it never ceases to amaze me. This week, we had a chance to have lunch w/ Barry Crist, the CEO of Centeris; Krishna Ganugapati, their VP of Development; and Chuck Mount, the VP of Marketing. Centeris is a company based out of Bellevue, WA that makes the Likewise product, which allows Linux servers to be managed within a Windows-centric environment. We got off to a great start in our discussion because one of the core and common goals that ties us to partners like Centeris is "Interop". Yes, Interop: finding more and better ways for Microsoft and non-Microsoft platforms and products to co-exist and thrive. This is a really important charter not only for the Microsoft Open Source Software Lab but for all of Microsoft. After brief introductions to the Program Managers of the team and our beloved Penguins, we got down to discussing what Centeris as a company was all about and, more importantly, what it is that Likewise does. Barry and Chuck gave us very good insight into the overall focus of Centeris and why there is a prominent need for this functionality in a heterogeneous environment.
If you're an ITPro managing a small, medium, or enterprise-wide shop, you know how diverse today's implementations are and can be. This translates to greater complexity in managing your environment, which, as the market data will tell you, is rarely single-platform centric. Thus, making accommodations for manageability of a diverse platform portfolio is a skill that we all must acquire sooner rather than later. This is where Centeris fits in perfectly, because it extends Windows-based manageability and Windows-based tools to the day-to-day management of Linux servers and improves interop. It also means that organizations with tight budgets can continue to manage their environment with existing skill sets.
The way Likewise works is that the console is installed on the admin's machine, the agent (which is an open source product) is installed on the Linux server(s), and these servers are then managed through the Microsoft Management Console (MMC). The Likewise Open agent includes server-side components (that work w/ Samba) and client-side components (that work w/ MMC). The functionality extended to the Linux systems is made possible through RPCs and SOAP (Simple Object Access Protocol). The Likewise Open agent is available on sourceforge.net and has been released under the CDDL (Common Development and Distribution License). We found the approach that Centeris took toward Linux manageability to be very simple and ITPro-centric.
The highlight of our discussion yesterday was getting to know more about Krishna Ganugapati. Krishna spent 10 years at Microsoft, from 1993 to 2003, most of it on the Windows development team. After we got into deep discussions, we found out that Krishna was the inventor of ADSI (Active Directory Service Interfaces), the preferred means for accessing Active Directory. Krishna also led the development teams for Windows IPsec and Windows wireless security through the Windows 2000 and XP releases. The interaction that followed between all of us, penguins, PMs, and Krishna, was very rewarding. Krishna got into the guts of how manageability is being approached as a concept by Centeris. The big takeaway for me, after we saw the Centeris demo, was that there doesn't always have to be a steep learning curve every time new technology is introduced into the environment. Sometimes it's easier to manage new technology with familiar tools, and that was a very novel concept that I walked away with yesterday. It also affirmed my faith in why "Interop" is as prominent, as important, and as critical as it is to us and to the success of Microsoft.
Thanks, Centeris!
by jcannon on July 11, 2006 06:17pm
Part 3 – Adaptation and simulation of Heterogeneous environments under lab conditions
A simple question that has always perplexed me is how software and hardware OEMs across the world simulate heterogeneous environments under lab conditions. I have witnessed several different approaches, practices, and stages of this adaptation, and each one of them is unique and correct in its own right. I guess that leaves the "big" question unanswered: how do you take a "real-life" scenario and manifest it under lab conditions? This is even more challenging because the average test lab for a medium to large organization is no match for the size and complexity of its elder sibling, the Enterprise Data Center, running its production systems, applications, and operations. So why squeeze all that complexity into a smaller scale? Is there one perfect method? Of course not; it depends on what heterogeneity means to you and your business. Let's look at why it's necessary, and share some techniques that may be helpful.
Let's start with why it's necessary to represent, if not an equivalent amount of heterogeneity within a lab, then at least a comparable one. Start with simple logic: why do we need a lab in the first place? In most cases it's an environment we can turn to and run processes, tests, and simulations which we dare not try in a production environment. The caveat here is that if we do want to test a tool or an app that we're about to roll into production, our best bet is to test it in the lab under conditions mirroring the production environment as closely as possible. It's also a place where we can develop workarounds, fixes, documentation, implementation practices, and as many supplementary support mechanisms as we'd like before we bite the bullet and push the tool or app into production. The expectation is that the results from the lab and the production rollout should bear a resemblance like that of the "Partridge Family" and hopefully not of the "Manson Family". Okay, bad joke, but you get the point.
Now on to “Tips and Tricks” to help with the process of adaptation and simulation of a lab environment that mimics your production one. Here’s what I found useful:
And finally, a small anecdote to help put things in perspective. Several years ago, when I was still on the east coast, I worked on implementing an asset tracking tool for desktops spread throughout the environment. We tested the tool on individual desktops and did not bother running the entire scenario over network connectivity in the simulation. We were told by the vendor that the tool uses less than 1% of CPU and a negligible amount of memory. After random tests, we rolled out the tool; its purpose was to run a script and send the results back across the network. However, due to ACLs in place, which we forgot to account for, and a lack of validation of packet delivery, the desktops stopped responding. This was an expensive lesson in why we should test the waters to the best possible extent before setting sail.
Just a few thoughts; I hope they trigger some more for everyone out there. As always, please do let me know if this has been useful and/or if you have a specific topic in mind you'd like us to write about.
by MJM on March 18, 2008 11:05am
A couple of months ago, I mentioned that Microsoft would be sponsoring the Computer and Information Technologies Section of the American Sociological Association's (CITASA) pre-conference and graduate workshop on July 31, 2008 in Boston, MA. That sponsorship included the "Microsoft CITASA Port 25 Award" to recognize excellent research on open source software development. CITASA chair Keith Hampton of the University of Pennsylvania recently announced that Yuwei Lin from the Centre for e-Social Science, University of Manchester has received the award and will be keynoting the CITASA 2008 Pre-Conference. Yuwei describes her research interests as follows:
"Free/Libre Open Source Software (FLOSS) studies, Science and Technology Studies (STS), virtual communities (in connection with e-collaboration, e-learning and e-society), usability and user requirement analysis (particularly in the area of e-Social Science), digital culture (especially in relation to hacker culture), and the cultural and socio-technical dynamics in community-based innovation systems. Other research interests include gender and ICTs, the digital divide and glocalisation of information technologies, innovation and knowledge dynamics. Additionally, her research also seeks to contribute to the genre of virtual methodology and online research methods by which researchers use new ICTs as a medium for social research itself."
(You can learn more about Yuwei's research and publications at http://www.ylin.org/) As open source matures and diversifies as a development model, interesting and challenging issues are surfacing. I’m very excited that Microsoft is supporting groundbreaking research to understand and address these issues. In the modern IT environment, community is a vital part of software success. Through the research of folks like Yuwei, community practices like collaboration and distributed development will become better understood and more effective.
by admin on June 06, 2006 03:30pm
Research Strategy Corner: Disambiguating “Open”
Disambiguate (transitive verb): to establish the true meaning of an expression, regulation, or ruling that is confusing or that could be interpreted in more than one way
I’m a Research Strategist with the Open Source Lab here at Microsoft. When folks ask what that means, I usually tell them the second-best definition of “strategist” I’ve ever heard is “a researcher who gets to make stuff up,” and the first-best is “someone who establishes a series of steps to achieve a goal.” The latter is what my job is all about. In the process, because it involves synthesizing technical information with insights from computer science, organizational science, and sociology research, I sorta get to make stuff up, as long as the math works (which is probably a little bit different from what the average civilian bystander thinks of as “making stuff up”).
What’s the goal? The title of this post summarizes it concisely: disambiguating “open.” When the phrase “open source” is used, for example, it could be a simple descriptive statement of fact about code visibility (read any good mash-ups lately?); it could refer to software artifacts available under a fairly wide range of license types; or it could be intended to mean something compliant with a very specific set of criteria like the Open Source Initiative’s ten-point definition (http://opensource.org/docs/definition.php). It could refer to one of tens of thousands of single-developer projects on SourceForge, or to highly coordinated efforts like FreeBSD; and on another hand altogether, it could appear in marketing materials from big corporations like IBM or Novell (if this animation is still up on Novell’s site, you can experience a dizzying array of suggestions for what “open” means to your enterprise: http://www.novell.com/solutions/?sourceidint=hdr_productsandsolutions ).
“Open” is one of those words that, in the software domain today, is increasingly becoming probabilistically uninformative: applied to an endeavor (like a software development project) or an artifact (like a piece of software), the word less and less enables you to accurately predict attributes of that endeavor or artifact. And it is precisely those attributes, like who built it and how, or its architecture, coupling, and interaction paradigm, that actually help you predict what will happen now and further on down the road if you choose to take a dependency on something.
I don’t care how the characterization “open” makes you feel, whether nervous or giddy with excitement: my objective function is the ability to use one bit of information to reliably predict other bits of information. In this space I’ll share our efforts to do that with regard to “open,” and what we find in the lab and in the world of academic research. But first I’ll give you some visibility into how I start out structuring lines of inquiry.
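To make “using one bit of information to predict other bits” concrete, here is an illustrative toy sketch of my own (not a method from the lab): mutual information measures, in bits, how much knowing one binary attribute of a project (say, “labeled open?”) tells you about another (say, “distributed development?”) across a sample of projects.

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Mutual information, in bits, between two binary attributes.

    `pairs` is a list of (x, y) observations, one per project. A result
    near 0.0 means knowing x tells you nothing about y; 1.0 means x
    predicts y perfectly.
    """
    n = len(pairs)
    joint = Counter(pairs)          # joint counts of (x, y)
    px = Counter(x for x, _ in pairs)  # marginal counts of x
    py = Counter(y for _, y in pairs)  # marginal counts of y
    mi = 0.0
    for (x, y), c in joint.items():
        pxy = c / n
        mi += pxy * log2(pxy / ((px[x] / n) * (py[y] / n)))
    return mi
```

If the label “open” co-occurs with an attribute no better than chance, it carries roughly zero bits about that attribute, which is exactly what “probabilistically uninformative” means here.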
There are a few different approaches you can take. For our purposes here, an analytic approach is like an argument from first principles: the position of Free/Libre software advocates is essentially analytic, since their argument is that software should be a certain way given the set of principles they start from, and there really isn’t any evidence you could collect out in the world that would change their minds. That discussion isn’t particularly relevant to my day job. An empirical approach is all about data and probability: if you know foo, you predict bar better. This is entirely relevant, but it is exactly what we don’t know enough about yet. A phenomenological approach (http://plato.stanford.edu/entries/phenomenology/ if you really want to know) starts with experience as it is experienced, and this is useful for starting to disambiguate open source. Here’s why: I don’t want to argue about what’s open and what’s not, or about whether things should be a certain way; I want to build an informative base of data that lets me characterize and analyze endeavors and artifacts underneath this fuzzy term “open.” So we can start by asking: what would I, and, if there’s enough established shared meaning, other people, experience as a phenomenon that is certainly “like open source” versus certainly “not like open source”? I suggest two sets of statements along an axis shown in Figure 1, below. Once we have this down, we have a starting point for collecting empirical data that (if our starting point is right) will position endeavors and artifacts somewhere along a continuum between the two extremes.
Figure 1: Phenomenological approach to characterizing endeavors
I won’t start into operationalized definitions here, because to some extent that would defeat the purpose of capturing a top-of-mind response to the statements themselves. So what do you think: top of mind, as you experience these collections of statements, is the essence of “like” and “not like” open source represented? Do they raise questions? Controversy? Let me know and we can dive into where some of these come from (yes, I said my experience-as-experienced, but remember, when I make stuff up the math has to work; there’s a lot of great research out there that can help tune these characterizations).
Disambiguation: Because what you don’t know you don’t know probably will hurt you
by MichaelF on August 09, 2006 01:38pm
In part two of two, Sam and Professor Gough continue their conversation focusing on Dynamic Languages and Professor Gough's work with Ruby and .NET.
Part One of this interview, as well as some background on Professor Gough and the LANG.NET symposium can be seen here.
by jcannon on July 07, 2006 02:41pm
It’s only been three months to the day since Port 25 launched. Exactly 73 interviews, posts and tips later we are only just getting started on delivering on our promise of technical and interoperability insight from the lab. The HPC clustering analysis project is well under way and I expect to have an overview post for that project in the next week. I’d like to see much more technical content on the site, but it is taking time to get our lab projects ready for public consumption.
For now, I'm excited to announce some site enhancements to how Port 25 allows you to interact, discuss, and debate ideas, as well as, yes, consume content via podcasting.
I’m interested in your feedback on how much you value the comments features. The limited feedback I’ve heard in the last month ranges from keeping things the same, to removing the need for registration, to eliminating comments entirely. What do you think?
We're looking forward to continuing to grow our community with a set of content & usability enhancements over the coming months. For now, I hope these small changes are a good surprise. Have a great weekend.
by jcannon on July 07, 2006 07:43pm
Our first podcast... This week, Sam talks with Fernando Cima from Microsoft Brazil's Security Center of Excellence about the challenges and progress being made in securing and maintaining today's mixed network environments. More specifically, the focus in this discussion is on Server and Domain Isolation. Before Microsoft, Fernando worked for the Brazilian government, as well as with Linux and FreeBSD security projects.
- Download the MP3 Directly - Learn more about Server and Domain Isolation.
Podcast Related Links: - Subscribe to the Port 25 Podcast Feed - Subscribe to Port 25 Podcasts in iTunes