Microsoft's Windows Server chief on Linux, 64-bit computing (part 2)
Bob Muglia talks about the threat posed by Linux and the promise of 64-bit computing.
By Carol Sliwa and Craig Stedman, Computerworld | Computerworld UK | Published: 00:00, 18 May 2004
Bob Muglia, senior vice president of Microsoft's Windows Server division, discusses the road map for future operating system releases, the competitive threat posed by Linux and the promise of 64-bit computing. Part 1 of the interview is available here. Part 2 of the interview follows:
How is work progressing on Longhorn, the next major Windows release, due in 2007?
We are building both a client and server release simultaneously. We'll track the milestones one for one. We will continue to test the server for six to 12 months after the client ships, because the server requires long regression-test cycles to assure the availability of the product.
Is there any chance Longhorn server will come out more than 12 months after the client?
It's always possible. I'm trying to create a structure where customers can have some expectations, and yet ultimately we'll ship it when it's ready to be shipped. What we know for sure is that when we give Windows Server to our beta-test customers and our internal IT organisation, they run multiple regression tests which take six to eight weeks at a shot. Then you've got to take that data back and come out with another one.
You've talked about the three major elements of Longhorn in the past: the WinFS storage model, the Indigo communications technologies for building advanced Web services, and the Avalon graphics subsystem. What are some of the other new features that corporate users will find useful?
Dynamic partitioning is an interesting discussion. Typically, these are the Itanium-based systems, these 16- to 64-way systems that are basically mainframes that we run on. Today, we don't support dynamic partitioning within those environments. In Longhorn, dynamic partitioning means that within an OS image, you can add resources to environments or swap resources out. So basically, you could have a 32-way system; if a processor fails, the operating system can notice that a processor has failed and dynamically swap in a new one to keep the application running without having any downtime at all. It's a substantive advantage from a high-availability perspective.
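The failure-and-swap behaviour described above can be modelled very simply. This is a toy Python sketch, not Windows code: a partition holds a set of active processors plus a spare pool, and a failed processor is retired and replaced by a spare so capacity is unchanged. All class and method names here are invented for illustration.

```python
class Partition:
    """Toy model of a dynamically partitioned system."""

    def __init__(self, cpus, spares):
        self.cpus = set(cpus)      # processors currently running the OS image
        self.spares = set(spares)  # standby processors available for hot-swap

    def on_cpu_failure(self, cpu):
        """Retire a failed processor and hot-swap in a spare, if one exists."""
        self.cpus.discard(cpu)
        if self.spares:
            self.cpus.add(self.spares.pop())

    def capacity(self):
        return len(self.cpus)


p = Partition(cpus=range(32), spares={"spare0", "spare1"})
p.on_cpu_failure(3)
print(p.capacity())  # still 32: a spare replaced the failed processor
```

The application-visible effect is what matters: capacity stays constant across the failure, which is the "no downtime at all" claim in the quote.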
There's a new scripting engine, code-named Monad. It's a completely managed code environment, and it is fully backward-compatible with existing commands on Windows servers. But Monad is really designed to improve the way an administrator can create scripts that pass information from one command to another. It uses XML as a mechanism for transmitting information between commands.
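Monad itself wasn't publicly available at the time, so as a rough illustration of the idea described above — commands passing structured records to one another instead of formatted text — here is a minimal Python sketch. The command names, record fields, and XML layout are all invented for illustration.

```python
import xml.etree.ElementTree as ET

def get_services():
    """Upstream 'command': emits structured records as XML rather
    than pre-formatted text, in the spirit of a Monad pipeline."""
    root = ET.Element("services")
    for name, status in [("spooler", "running"), ("w32time", "stopped")]:
        ET.SubElement(root, "service", name=name, status=status)
    return ET.tostring(root, encoding="unicode")

def where_status(xml_text, wanted):
    """Downstream 'command': filters on a named field of each record,
    not on a column position in somebody's screen output."""
    root = ET.fromstring(xml_text)
    return [s.get("name") for s in root if s.get("status") == wanted]

# The pipeline 'get_services | where status == running' as functions:
print(where_status(get_services(), "running"))  # prints ['spooler']
```

The point of the design is that the filter never has to parse human-oriented output; it reads a field by name, so upstream formatting changes can't break downstream scripts.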
What are the areas of focus for the interim Windows Server release, code-named R2, which is due in the second half of 2005?
Improved information access, secure access, really from anywhere. One of the things we're looking at is how we can allow access from the Internet to services within a corporate intranet without VPN-ing in. For things like Terminal Server and file server access, we're looking at providing that through firewalls.
Another area of investment is federation: enabling companies to take their Active Directory information and share resources within one company with somebody in another company, without having to maintain duplicate passwords.
We're doing an initial implementation of network defence, sometimes known as quarantine services. It allows companies to make sure that when a machine enters their network, either through a VPN or because it's coming onto the LAN for the first time or re-entering the LAN, that machine is up to spec in terms of both virus software and patches before it's allowed to access the network.
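The admission decision being described reduces to a simple policy check. Here is a hedged Python sketch of that logic, not Microsoft's implementation: a machine presents a health statement, and the network grants full access only if its patch level and antivirus signatures are up to spec. The field names and policy thresholds are invented.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class HealthStatement:
    patch_level: int     # highest installed patch roll-up (illustrative)
    av_signatures: date  # date of last antivirus signature update

REQUIRED_PATCH_LEVEL = 7      # illustrative policy values
MAX_SIGNATURE_AGE_DAYS = 14

def admit(machine: HealthStatement, today: date) -> str:
    """Return 'full' access, or 'quarantine' (e.g. a patch-server-only VLAN)."""
    if machine.patch_level < REQUIRED_PATCH_LEVEL:
        return "quarantine"
    if (today - machine.av_signatures).days > MAX_SIGNATURE_AGE_DAYS:
        return "quarantine"
    return "full"

print(admit(HealthStatement(7, date(2004, 5, 10)), date(2004, 5, 18)))  # full
print(admit(HealthStatement(5, date(2004, 5, 10)), date(2004, 5, 18)))  # quarantine
```

A quarantined machine would typically see only the update servers it needs to become compliant, then re-run the check on re-entry.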
We're taking some things that have shipped separately, like the rights management services and SharePoint, and including those inside R2. We are also including [Visual Studio .Net development tools code-named] Whidbey and the Whidbey environment in R2 as well.
Another big area of investment is the branch environment. Basically, what we're doing here is investing in technology that enables people to install a Windows server and treat it more like a cache of information, essentially an accelerator for the branch, to improve the user experience and yet keep the administrative costs really low. The assumption is that these branch offices do not have people who are capable of backing up servers, etc. The initial focus is really on caching services for files, so that we can have local file shares in those branch offices and yet have them completely backed up and maintained from a central location, and dramatically improve the speed of communications between the two.
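The "server as a cache" idea above is essentially a read-through cache in front of a central file store. This is a toy Python sketch under that assumption, with invented class names, not a description of the actual product: the branch server serves local copies where it can and fetches from headquarters only on a miss, while the master data (and therefore backup) stays central.

```python
class CentralFileServer:
    """Stand-in for the headquarters file server holding the master copies."""
    def __init__(self):
        self.files = {"policies.doc": b"v1 contents"}

    def read(self, name):
        return self.files[name]


class BranchCache:
    """Branch-office server acting as a read-through cache over the WAN."""
    def __init__(self, central):
        self.central = central
        self.local = {}  # branch-side copies, populated on demand

    def read(self, name):
        if name not in self.local:                  # miss: one fetch from HQ
            self.local[name] = self.central.read(name)
        return self.local[name]                     # hit: served locally


hq = CentralFileServer()
branch = BranchCache(hq)
branch.read("policies.doc")  # first read goes over the WAN
branch.read("policies.doc")  # second read is served from the branch cache
```

The administrative win is that nothing in the branch needs backing up: losing the branch server loses only cached copies, which repopulate from the central master.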
I'm going to the teams, and each one of them has a set of criteria about what we think we can get into R2. Unlike something like Longhorn, where you have some key things you're trying to accomplish and you hold the product until they're all done, this is a case where we're really going to focus on getting the thing done and out the door so we can get that value to our customers.
Is Microsoft releasing R2 because it's cognisant that it needs to deliver an update to its Software Assurance maintenance program customers?
Sure. If you decide your major releases are four years apart on average and your Software Assurance customers are on three-year cycles, it's probably a good idea to have something to deliver value in between.
Do you think Windows Server 2003 customers will migrate to R2?
I think that a lot of customers will. I think a very small number of servers will upgrade. The typical pattern for customers is they have a new application, they buy a new server for it. They don't do as much upgrading of existing servers. What we know about Windows Server is that the majority of sales go into new machines, and I think that will continue.
How much of a change does the R2 plan represent from what you had previously?
It's pretty substantive. I don't think we've had a minor release [since Windows] 3.51, so you've got to go back a fairly long way for the last time we've done this. These update releases are really like point releases. They've been around since the beginning of time. The idea is not requiring a major amount of disruption for our customers. We've been doing feature packs, which is useful. But what we've found from our customers is that it's easier for them to consume it if we bring all those sorts of things together.
Will you do away with feature packs?
Not entirely. But we're going to try and minimise them in the future. Our goal is to, as much as possible, incorporate them into either a major release or an update.
Will customers be able to pick and choose the feature packs they want to use?
Sure. They can turn on what they want.
Will anything have to be cut from Longhorn to help you make the 2007 projected release date?
There is a set of scenarios at a very, very high level that people have been thinking about for Longhorn server. And some of those scenarios are being cut back in some ways. But it's not so much that we're taking massive pieces of functionality out of the system; if you looked at the way people would use Longhorn server, some of the scenarios have been cut back. So people might, for example, have had some dreams about how Longhorn server could solve some branch scenarios. We'll probably constrain the branch solution to a fairly minimal set of things on top of what we do with R2.
Will WinFS be scaled back?
WinFS is absolutely in Longhorn server, and we expect that people using Terminal Server and other things will use WinFS. We may not be at the state when Longhorn server ships that WinFS can be used for collaborative services by hundreds and hundreds of users. There may be some cases where thinking about the scale aspects of a server and the scale aspects of the server file system, some of those things might need to wait until post-Longhorn for it to happen. But one of the great things about this regular release cycle is we have another opportunity to get that value to customers, even after Longhorn server ships.
What scenario will be supported in Longhorn? Tens of users?
Tens of users, we'll have no problem scaling to, certainly. It's just a question of getting the software fully optimised for that server environment. What it's all about, really, is that Longhorn is one of those releases that a lot of people, especially internally at Microsoft, have very high expectations for. And the release is transitioning from its early stages to being something that's very real. I think whenever you have that transition, features get cut along the way and scenarios get cut. This is a natural maturing process of any big release.
Will you have to change the Windows Server product schedule to comply with the recent European Union ruling?
We don't really see any impact on our schedule at this point associated with the EU. The sorts of directions that we've heard from the EU for the server organisation have to do with the publishing of protocols, which we're already well on a path to doing. We agreed to do some of that as part of the Sun agreement that we recently did.
What sorts of protocols?
With the consent decree that the DOJ had, we agreed to publish all of the protocols between the Windows client and the Windows server. The direction we're now heading is publishing of server-to-server protocols.
Is there any chance Microsoft will ever completely rewrite the Windows code base to address long-standing security vulnerabilities?
What we've found is that these things are small changes to many places. So it's typically one line of code to change. But sometimes there are thousands of places where those sorts of changes need to be applied. Our research group has done an amazing job of building tools that help to find vulnerabilities in code.
Like anything else, there are times when you do major rewrites. And in fact, we're looking at what it would mean now. But it's not because of the security stuff. It would just be because the hardware environment has changed so substantially. Think forward 10 years to a world where IPv6 is ubiquitous. You've got literally trillions and zillions of IP addresses available. Think about a world where perhaps each of these components can be very distributed. That's the kind of environment we're thinking about, where computers are much more distributed than they are today.