Friday, May 8, 2009
Conference planning is like cat juggling
“Good Lord - I've heard about this - cat juggling! Stop! Stop! Stop it! Stop it! Stop it! How could there be a god that would let this happen?” -- Steve Martin, in "The Jerk"
Sometimes, that's what it has felt like putting the conference schedule together for Boston's June 22-24 SPTechCon. One speaker finds he can no longer make it, and you try to move another into that slot, but there's a conflict with a flight out, so that won't work. You end up having to move around six speakers to accommodate the one change, and no matter what you try, you end up getting scratched or bitten. I believe planning out the architecture for an enterprise-wide SharePoint implementation is less complicated than this. (Probably not, but you see where I'm coming from!)
It all works out in the end, though, and I'm excited to say we've added some outstanding new sessions, which I'll be highlighting in the coming days. Three will be presented by Microsoft technical directors, and a fourth brings back one of the most popular sessions from January's SPTechCon in Burlingame.
As if that's not reason enough to register right now, an early-bird discount expires Friday, May 8. That's today. Sign up for the conference now and save $330 off the price.
-- David
Tuesday, August 12, 2008
Microsoft Cloud May Exceed 200k Servers
A video produced by Microsoft’s Environmental Sustainability group may have revealed how much weight the company is throwing behind its cloud computing initiative.
Long Zheng posted an item on his I Started Something blog that allegedly disclosed Microsoft’s data center numbers and corresponding power consumption. With an eagle eye, Zheng took screen captures from the video of a status console that listed Microsoft’s servers from Amsterdam to Tukwila, Washington.
If Zheng’s report is accurate, Microsoft was running 15 data centers worldwide hosting 148,357 servers on 17,406 racks and consuming 72,500 kW of power as of January. Roughly half of those servers were churning for Live Search, followed by Hotmail and “other.”
Reports indicate that Microsoft is adding servers at a rate of 10,000 per month; the company may by now have installed its 200,000th server. That’s one big cloud.
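That 200,000 milestone is easy to sanity-check. A minimal back-of-the-envelope sketch, using only the numbers reported above and assuming the 10,000-a-month rate held steady:

```python
# Project when Microsoft's server count would cross 200,000,
# given the post's figures: 148,357 servers as of January and
# roughly 10,000 new servers per month (assumed constant).
import math

servers_january = 148_357
growth_per_month = 10_000
target = 200_000

months_needed = math.ceil((target - servers_january) / growth_per_month)
print(months_needed)  # 6
```

Six months of growth from January puts the 200,000th server around July 2008, which squares with the timing of this post.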
-- David Worthington
Tuesday, August 5, 2008
More on Midori: Security
The final piece of SD Times reporter David Worthington's exclusive, groundbreaking coverage of Microsoft's "Midori" operating system has been posted on sdtimes.com. The focus is security: the article looks at how Microsoft is using memory access control to protect against privilege elevation attacks. Worthington was able to examine internal Microsoft documents on the operating system, which is conceived for a post-Windows world of high connectivity and cloud computing. The first part of his coverage looked at the technical features of Midori, while the second discussed a migration path from Windows platforms to the new Midori OS. There is great depth to the reporting, as SD Times is the only news organization to have seen the documentation. In this world of Internet journalism, every site has a Midori story up, but SD Times is the only site with first-hand, in-depth knowledge of the project. It's fascinating reading.
-- David Rubinstein
http://www.sdtimes.com/link/32662
Friday, August 1, 2008
Rackspace IPO Will Be a Good Test
Next Friday will be the first day that Rackspace shows up on the New York Stock Exchange. The company is expecting to raise around US$2 billion in its IPO, and most of that money is likely to be spent on expanding the company's already massive data centers. The Rackspace IPO will be a very interesting bellwether for this troubled economy, and depending on how the stock ends its day next Friday, we'll see a great indicator for how the market feels about technology.
When VMware went public last year, it was the darling of tech. Today, it's still flying high, with 90 percent stakeholder EMC turning in big numbers. But the VMware IPO came in a very different economy. Today, we have crumbling housing markets, an oil price crisis, escalating inflation and a country on the verge of a nervous breakdown. Does that bode well for launching a tech company into the stock market? The truth is, no one really knows.
There are many ways the IPO could go. With share prices expected to be around $16, the company is not entering the game like Google or Amazon did back in the day. Rackspace's share prices probably won't inflate into triple digits, as tech stocks were wont to do in the late '90s. On the other side of the coin, Rackspace won't be using this newfound capital to install swimming pools and golden parachutes: the management of the company is only taking around 300,000 shares. That's because Rackspace has repeatedly reminded Wall Street that it's only coming to town for the investment, not the fame and fortune.
With Rackspace having built around $500 million in yearly revenues on just over $40 million worth of investment, it's exciting to think what the company could accomplish with $2 billion under its belt. It is very likely that Rackspace could become the name most synonymous with server hosting, just as Google has done with search and VMware has done with virtualization.
With all that dough coming in, one thing is for sure: whoever is selling servers and switches to Rackspace is going to be very happy in the coming months.
-- Alex Handy
Thursday, July 31, 2008
Outages Could Give Amazon an Edge
Amazon.com’s nearly eight-hour outage to its S3 cloud storage service on Sunday, July 20, might have irked some customers and made headlines, but in the end, it may leave Amazon with the upper hand.
Analyst James Staten told Systems Management News that outages and issues with cloud computing should be expected because it is in its initial phases.
Staten said the cloud computing growing pain period may take several years, but there likely won’t be an exact point in time where all issues with cloud computing are cured. Maturity will happen on a company-by-company basis, he indicated.
“Amazon, being the first in the market, has the biggest target on their head,” Staten said. “They’ve had the biggest outages, and they’ve been working to address it. I expect they’ll be extremely resilient to these kinds of outages in the next year to year-and-a-half.”
As such, Amazon may have an advantage over other companies in the cloud computing market because they are taking their bumps and bruises right now. They will experience their outages, muck through their cloud computing initiation phase, and should learn from their mistakes to get a better feel for this whole cloud computing idea. When cloud computing is everywhere, Amazon will be way ahead of everyone in cloud maturity.
The key for Amazon is to keep making sure that these outages get smaller and smaller as time goes by, and their “emergency” response gets better and better. Smaller companies that are new to the cloud computing market, like Joyent and GridLayer, will also have their own bumps along the way, but don’t have the same exposure and “probably won’t end up in the New York Times,” as Staten said.
However, as the bigger corporations start partaking in cloud initiatives, they could well find themselves playing catch-up with Amazon.
-- Jeff Feinman
Tuesday, July 29, 2008
Microsoft's Plans for Post-Windows OS Revealed
Life without Windows? Apparently, even Microsoft can conceive of such a time and place, and SD Times reporter David Worthington got a look at the company's plans to develop an operating system, code-named Midori, for the massively connected, high-speed, powerful computing world in which we now live. The plans are detailed in three articles; the first, "Microsoft's Plans for a Post-Windows Operating System," is up on the site. The others, which address migrating from the legacy OS to Midori and Microsoft's attention to heightened security, will be posted soon.
-- David Rubinstein
Thursday, July 24, 2008
Spammer Slips Out of Slammer
The U.S. Attorney's Office for the District of Colorado said that Edward Davidson, known as the "spam king," has escaped from a minimum-security federal prison camp. Davidson was serving a 21-month term for sending out large volumes of spam designed to mislead recipients into handing over their information and money. He was convicted of tax evasion and falsifying information in e-mail pitches for "penny" stocks.
Apparently, he lasted only two months in prison before escaping. Prison guards said he escaped when his wife was leaving the prison after visiting him. He somehow made a run for it and drove off with his wife in their car. He has been in “escape” status since Sunday.
-- Michelle Savage
Wednesday, July 23, 2008
Is Google Digg-ing for Gold?
Rumors that Google is close to acquiring social voting site Digg have resurfaced, with multiple sources hinting that the companies are close to signing a deal.
The latest buzz is that a letter of intent was signed on a deal that is worth about $200 million. But are the rumors true? Bloggers and analysts have been wagging their tongues about Google buying Digg for over a year. And today’s rumors are based on several unnamed Google insiders—neither Google nor Digg has confirmed the deal.
At this point, there’s little more to do than wait for more concrete evidence that the deal will go down. And, for fun, we can speculate on what Google will do with Digg. The combination of Digg and Google News would be a nifty mix, but any efforts made by Google to combine it with online advertising may be subject to a huge Digg community can of whup-ass.
--Michelle Savage
Monday, July 21, 2008
Defcon 2
Every year at about this time, we hear about the amazing new exploits and tools that will be shown off at Black Hat. To a lesser extent, there’s discussion of what will be shown at Defcon, though, typically, that show tends to be 15 presentations on how to use Wireshark mixed with political talks about copyright and legal hacking. In years past, we’ve seen Joanna Rutkowska’s introduction of the red pill and blue pill (virtualization as trojan platform), Greg Hoglund show off his World of Warcraft attacks, and H.D. Moore discussing Metasploit’s many uses.
Despite the illustrious past of Black Hat and Defcon, this year’s show is shaping up to be one of the most dangerous ever. Between Rutkowska’s updated pills, Dan Kaminsky’s much ballyhooed DNS attacks, and the recent revelation that Kris Kaspersky will be unveiling processor-based attacks sometime in October, this should be one of the most eventful falls in computer security since the Legion of Doom first banged on virtual doors back in the 1980s.
Add to all of this the fact that Firefox 3 just arrived, making it a juicy target for hackers, and that almost every DNS server in the world has been patched within the last month, and you’ve got a recipe for the perfect storm. This fall, there really won’t be anyplace to hide. With the proper application of patches and security policies, it’s entirely possible to avoid all this strife, but the toughest part of staying up to date is keeping on top of the ever-changing scene of exploitation. And with this August looking to be rife with new exploits, we’re all in for one hell of a ride.
-- Alex Handy
Google Trumps Microsoft as UK's Top Brand
Score One for Google!
The 2008 Superbrand survey lists Google as the U.K.’s top brand for the first time, bumping Microsoft—last year’s winner—to second place.
Superbrands Top Ten
1. Google
2. Microsoft
3. Mercedes-Benz
4. BBC
5. British Airways
6. Royal Doulton
7. BMW
8. Bosch
9. Nike
10. Sony
Apple came in at #11.
-- Michelle Savage
Thursday, July 17, 2008
Apple Wants to Shut Psystar Down
Apple filed a 16-page lawsuit in federal court demanding that Psystar Corporation, a small computer maker marketing Intel-based systems with Mac OS X preinstalled, recall all the systems it has sold. Why? Because Apple said that Psystar violated numerous copyright, trademark, breach-of-contract and unfair competition laws when they preinstalled Mac OS X 10.5 (Leopard) on the desktop and server systems they sell (called Open Computer and OpenServ).
“Apple has never authorized Psystar to install, use or sell the Mac OS software on any non-Apple-labeled hardware,” the filing said.
Apple further demanded that Psystar hand over all profits made from selling computers with Leopard, and stop selling the systems immediately. Ouch.
-- Michelle Savage
Live Mesh Moves from Private to Public Beta
Microsoft yesterday made its software plus services platform, Live Mesh, available to anyone in the United States with a Windows Live ID—no invitation required.
The Live Mesh service allows users to share data among multiple Windows computers, and via the Internet. Users add documents to the “mesh,” which is an online storage facility, and then access them from another computer online. Live Mesh is an example of Microsoft’s Software Plus Services strategy, which combines on-premise software with cloud computing technology.
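The pattern the post describes is straightforward even though Microsoft's internals aren't public. Here is a toy sketch of device-to-cloud synchronization in that spirit; the class names, the integer version counters, and the last-writer-wins policy are all assumptions for illustration, not Live Mesh's actual design:

```python
# Toy model of a "mesh": a shared online store that multiple devices
# push local changes to and pull newer versions from.

class MeshStore:
    """Stands in for the online storage; maps path -> (version, data)."""
    def __init__(self):
        self.files = {}

    def push(self, path, data, version):
        # Last-writer-wins: only accept a strictly newer version.
        current = self.files.get(path)
        if current is None or version > current[0]:
            self.files[path] = (version, data)

class Device:
    def __init__(self, name):
        self.name = name
        self.local = {}  # path -> (version, data)

    def save(self, path, data, version):
        self.local[path] = (version, data)

    def sync(self, store):
        # Upload local changes, then download anything newer.
        for path, (version, data) in self.local.items():
            store.push(path, data, version)
        for path, (version, data) in store.files.items():
            mine = self.local.get(path)
            if mine is None or version > mine[0]:
                self.local[path] = (version, data)

laptop, desktop = Device("laptop"), Device("desktop")
mesh = MeshStore()
laptop.save("notes.txt", b"draft 1", version=1)
laptop.sync(mesh)     # document enters the mesh
desktop.sync(mesh)    # second machine picks it up
print(desktop.local["notes.txt"][1])  # b'draft 1'
```

A real service has to resolve concurrent edits, authenticate devices, and sync deltas rather than whole files, but the push-then-pull loop above is the essential shape of "add it to the mesh, access it anywhere."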
Since April, Microsoft has reserved Live Mesh registrations for those with an invitation, but now it’s open to anyone who wants to use it in its early stages. Microsoft said there is no waiting list at this time, but it will cap the number of public beta testers.
-- Michelle Savage
Wednesday, July 16, 2008
Sys Admin Gone Wild
An IT network administrator working for the city of San Francisco was jailed for locking up a multimillion-dollar city computer system that handles sensitive data, and he is now holding the password hostage.
San Francisco police arrested Terry Childs, an employee of the city’s Department of Technology, for improperly tampering with computer systems and causing a denial of service. Now he is the only one who can get into the network. He also set up devices to gain unauthorized access to the system.
Police believe Childs set up a secret password, giving him exclusive access to the city’s new FiberWAN (wide area network), which includes city payroll and law enforcement records. On Sunday, he was arrested and charged with four counts of tampering with a city-owned computer network. Over the course of the past few days, he has given police fake passwords and refuses to give up the real one.
No one knows exactly why Childs locked up the system. However, the kicker on this story is that San Francisco is continuing to pay his $126,000 annual salary, although it is planning to decide whether he will be placed on “unpaid leave” this week. Hmmm… Does jail time count as unpaid leave?
-- Michelle Savage
Thursday, July 10, 2008
The Anti-Virus Scam
I have a very close friend who relies on me constantly for Windows tech support. Not that I know anything about Windows, or that I like to fix his machine all the time. But as he constantly reminds me, even though I shun Windows and only use the platform for gaming, I still know a lot more about it than he does.
My friend has an iMac that dual boots, thanks to my setting it up that way for him. He really only ever uses the Windows side, and then, only to play Pirates of the Burning Sea, an online role-playing game in which he runs a band of British sailors. My friend is a huge sailing buff, and got out of the army a few years ago, so he's a big fan of being the leader of a squad of other players in the game.
When I first went over to set up his system, I downloaded Firefox and made a point of bookmarking some useful sites for him. I bookmarked Hulu, Surfthechannel.com and YouTube so he could watch things online. I bookmarked his bank. And I'm not ashamed to admit that I had a few favorite porn sites in there too. Knowing what a meathead my buddy is, I figured I'd better show him how to find decent porn, and not the sort that demands credit cards or infects your systems.
My bachelor friend was coasting along fine for a while. His graphics card drivers kept going bad, but a quick reinstall of those made everything OK. Then, I got the dreaded phone call.
“I think I have a virus or something. It keeps telling me I am infected.”
After fruitlessly attempting to walk him through a few first solutions, I had to go over to his apartment in the Haight to fix the problem.
What happened? Despite my bookmarking a very simple-to-use porn site, my numbskull friend had clicked on an ad along the right side of the site, where it clearly states “Our Advertisers.” I'm sure one of the ads told him “someone in San Francisco wants to have sex with you!” and he dutifully clicked, hoping for some kind of free love.
The end result was that he downloaded an application. An anti-virus application. Or so he thought. The app is called Advanced Anti-Virus, and it's the digital equivalent of a slap in the face; each time he boots, this horrible program tells him he's infected and he needs to use the program to disinfect. When he runs the “disinfect,” another window comes up asking for a credit card number and some personal information. It says he needs to buy the “Pro” version, which is another way of saying he needs to send his credit card info to some awful scammers in Malaysia.
I looked up the company behind this application. The only thing I could find was a domain registration under the name Cindy Chan, with the following phone number: +1-415-1234567
My friend is now looking into ways to track down these people, knock on their door, and confront them. I am quite inclined to help him in this endeavor, as I think it could be a good business model.
I'm certainly not in favor of the death penalty for bloggers, virus writers and such, as Iran is now proposing.
But I am absolutely in favor of stopping everyone associated with Advanced Anti-Virus. They're not experimenting like a virus writer. They're not political prisoners, or researchers trying to help the world. They're just a bunch of Internet thugs, and they deserve swift and painful justice.
-- Alex Handy
Yahoo Says You're the BOSS
Yahoo has launched its Yahoo! Search BOSS (Build your Own Search Service) platform, giving third-party developers and companies a way to create their own Web search engines using Yahoo’s search infrastructure and technology. In exchange, Yahoo requires developers to run its ads alongside the search result pages generated through the service.
This seems like an obvious attempt to extend Yahoo’s reach on the Web, and nab a little bit of the market share from Google. BOSS looks a lot like a Google tool that allows Web sites to customize their search engine to deliver results that are more relevant to their users.
-- Michelle Savage
Tuesday, July 8, 2008
Angry customers
In the world of journalism, we don't usually talk about customers. We mention subscribers, readers, letter writers and the occasional angry flame sender. But customers are rarely mentioned, primarily because we don't think of our readers as customers. So much of the publishing industry is about building a community of readers, of people interested in the same subject.
But today, something happened that got me thinking about customers from another perspective. I recently wrote, as a freelancer, a piece for a local metropolitan magazine. The piece focused on a local art scene and its use of recycled materials. I interviewed three local artists and told their stories. One of these artists made tables out of salvaged wood.
Now, I write a lot of things every day. Once I've finished something, I generally remove it from my mind, to the point that, unless there is a byline, I sometimes cannot even remember if I wrote the piece I am reading. I finished this art piece about three months ago, so I forgot most of what I put in it.
So when I received an angry e-mail from the table maker today, my first reaction was to expect that I'd goofed up somewhere along the line.
The e-mail accused my article of identifying the table makers as furniture recyclers, and associating them with a local salvage shop. I'm sure there have been far more egregious errors in the history of western civilization, but to this husband-and-wife table team, this was tantamount to murdering their firstborn.
I decided not to respond just yet, to wait to hear from my editor. When my editor did get in touch with me, I apologized for the mistake immediately, saying that I thought I'd said they used salvaged wood, not salvaged furniture to make their tables.
My editor responded by telling me that, in fact, the piece did say salvaged wood, and was 100 percent factually correct. What the table makers were upset about was the picture the magazine's photographer had taken of them outside a local salvage shop, with which they had no affiliation.
These people stood out there for the pictures and said nothing about how inappropriate the setting was for the image.
And yet, here I've spent all morning fretting over an error that wasn't my fault. I suddenly understood exactly what it feels like to be the head of IT support when an executive comes rushing down to scream about a lack of a floppy drive in his machine.
-- Alex Handy
A Digg-y-back Ride
My colleague Alex Handy has written an interesting look at Digg, the news site on which readers decide which stories gain prominence. Well, the people who have Digg-ed (dugg?) Alex's story must have noticed that it was a pretty popular read, because in the comments section, they are posting links to other articles -- some taking a contrarian point of view, and others that have little or nothing to do with the topic itself.
Let it now be known forever that the practice of riding the popularity of a story on Digg to promote another story shall be known as "Diggy-backing."
-- David Rubinstein
Monday, July 7, 2008
Microsoft Backs Icahn's Call for New Yahoo Board
Just when we thought the dust had settled, the Microsoft-Yahoo drama continues.
A lot happened today.
Billionaire investor Carl Icahn revealed that he is in talks with Microsoft about the potential acquisition of Yahoo if its current board is ousted at the upcoming Aug. 1 annual meeting.
Microsoft released a statement supporting Icahn's effort to unseat Yahoo!'s board and replace CEO Jerry Yang.
And Yahoo retaliated with a statement of its own, saying that it strongly opposes the Microsoft—Icahn plan of action.
In his letter, Icahn said that he and Microsoft chief Steve Ballmer have met several times to discuss "a transaction to purchase the Search function with large financial guarantees or, in the alternative, the whole company." By replacing the current board with members who are open to negotiations with Microsoft, Icahn said, the deal would move along smoothly, as it would prevent Yahoo CEO Jerry Yang from being able to "botch up" future negotiations.
Microsoft confirmed that the deal could move forward if the board is replaced. “We confirm, however, that after the shareholder election Microsoft would be interested in discussing with a new board a major transaction with Yahoo!, such as either a transaction to purchase the “Search” function with large financial guarantees or, in the alternative, purchasing the whole company,” said the company in a statement.
Yahoo said today: "Mr. Ballmer and Mr. Icahn have teamed up in an apparent effort to force Yahoo! into selling to Microsoft its search business at a price to be determined in a future 'negotiation' between Mr. Icahn's directors and Microsoft's management. We feel very strongly that this would not lead to an outcome that would be in the best interests of Yahoo!'s stockholders. If Microsoft and Mr. Ballmer really want to purchase Yahoo!, we again invite them to make a proposal immediately. And if Mr. Icahn has an actual plan for Yahoo! beyond hoping that Microsoft might actually consummate a deal which they have repeatedly walked away from, we would be very interested in hearing it."
Microsoft said it was "premature" to discuss details of any future negotiation for Yahoo.
-- Michelle Savage
Tuesday, July 1, 2008
David Caminer, World's First Systems Analyst Dies at 92
David Caminer, who pioneered the use of computers for business purposes, died on June 19 in London at the age of 92.
In 1951, before IBM had entered the business computing market, Caminer was one of the brains behind LEO (short for Lyons Electronic Office), the world’s first business computer, a distinction certified by Guinness World Records. It was 16 feet long with 6,000 valves and could store more than 2,000 words. Yes, this was a big deal back then. In fact, it was a major breakthrough in business practice, and Caminer was promoted to director of LEO Computers. New Scientist best summed up this accomplishment: “In today’s terms it would be like hearing that Pizza Hut had developed a new generation of microprocessor, or McDonald’s had invented the Internet.”
As his career advanced in the 1970s, he lived in Luxembourg as project director for the installation of a computer and communications system for the European Community.
Caminer was widely respected as a pioneer of business computing and will forever be remembered as the world’s first systems analyst.
-- Michelle Savage
Wednesday, June 25, 2008
Heads in the Clouds
Last night I attended Cloud Camp, an impromptu conference in San Francisco that focused on cloud computing. The event was thrown together in three weeks and took advantage of a large number of Web admins, developers, movers and shakers being in town for other shows. This was an unconference, a term coined years back at BarCamp, a collaborative get-together created to show up O’Reilly’s exclusive Foo Camp. That means there were no scheduled talks or keynotes, only a big paper grid, some Sharpies, and lots of enthusiastic folks ready to talk and ask questions.
When the attendees had announced their proposed sessions and placed them in the grid of times and meeting spaces, the 300 or so attendees filed out and went to chat about what exactly cloud computing is. And the resounding conclusion reached by most was that Cloud is the new SOA. And that’s not a good thing.
The first talk I attended was supposed to be about cloud architecture. Hurrah, I thought, let’s hear about how you open an account with Dell and get those servers into the grid 10 minutes after you unbox them. But, no, the talk ended up being a lengthy product pitch, veiled in a thin smear of “what’s in a cloud stack.” It quickly descended into the leader extolling the benefits of a cloud-based markup language used to describe system stacks. Of course, this was the lead engineer behind said markup language, and it was also the primary product of his startup.
Strike one.
Next, I attended a talk on using Ruby in the cloud, though the talk was ostensibly about reaching 1 billion page-views a month. This discussion focused on the success LinkedIn had using Joyent to host its Facebook application. All I got from this discussion, aside from some excellent Ruby speed tips, was the distinct feeling that I’m missing out on the gold rush taking place inside Facebook applications.
Strike two.
The most interesting part of the evening for me wasn’t the talks, though I hear Google’s Kevin Marks actually managed to spark up a good session, and that Amazon’s Web Services guys were there to listen to complaints. My night was capped off by a lengthy discussion with an unabashed, unashamed venture capitalist. We chatted for a long time about where the money could be made in the cloud. His conclusion was that there would eventually be big roles for middle-men. I called them integrators, but he wasn’t so confident in that term.
Foul tip, just down the third base line.
The trouble with the cloud, right now, is that it’s being used to describe a number of different types of systems. There’s the Google-Amazon system, where you build a non-critical application and host it inside the massive grid of computers at these Web companies. That’s what Cloud is supposed to mean. The other cloud, however, is the internal cloud. It’s a term used to describe a massive grid inside a company, where individual applications are provisioned, allocated, and dynamically resized to take advantage of a slice of this big grid. It’s a commodity in the basement that’s squeezed into injection-molded case scenarios.
Hmmm, sounds an awful lot like service-oriented architecture, doesn’t it? SOA can mean internal systems connecting and chatting the way we always wanted them to but never quite accomplished. Or, SOA can mean bringing in SaaS and tools from outside and tying them to internal systems. They’re almost exact opposites. But then, they aren’t at all. They just vie for the same resources, attention and standards. Yet making the Subversion server talk to the change management server is almost entirely unlike making Salesforce.com talk to your company’s Exchange server.
And yet, they’re very similar. As similar as, say, two clouds. Shapes and forms, speeds and purposes aren’t the real meat of a cloud. The meat is in the viewer. What do you see in that cloud? Oh, Winnie the Pooh! And that one? A rain storm.
If my new VC friend is right, the clouds will soon be filling up with folks who can fill in the mortar between applications, servers and cloud hosts. Not unlike the wildly large ecosystem of SOA tools and products that sprouted up over the last three years, cloud computing will likely become a super buzz word, if it hasn’t already. It’ll be the place where we start to find new standards, new innovations, and new three-letter acronyms.
Let’s just hope that this time, there are fewer standards involved. The last thing we need right now is a new set of WS-*.
-- Alex Handy
Monday, June 23, 2008
Top 10 Reasons for Continuous Data Protection
At last week's HP confab in Las Vegas, FalconStor executive Peter Eicher gave a talk called "Ten Reasons You Need Continuous Data Protection."
FalconStor sells a solution in this area, and a few of the tips were product-centric, such as the flexibility to use any storage device or protocol you choose. Others, however, were more general in nature and address some issues regarding data backup and recovery.
Continuous data protection gives multiple recovery points, and moves away from the once-a-day practice of backing up data. "It's the single overriding reason" people adopt CDP, Eicher said. But there is the issue of data integrity to consider. Using what Eicher termed "full CDP," users are continually capturing data, so in the event of a disaster, nothing is lost. However, recovery time can be quite long. "Near CDP," he said, allows for snapshots of the data at regular intervals, making recovery quicker, but introducing the possibility of data loss, if something was written to the server between the last snapshot and the failure. "How bad is it if you miss a few transactions? If each order is for a million dollars, you don't want to miss any," he said.
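The trade-off Eicher describes can be sketched in a few lines of code. This is my own toy model, not FalconStor's product or API: the `recoverable` function, the sample orders, and the use of a snapshot interval of 0 to stand in for full CDP are all hypothetical, but they show why the snapshot interval bounds how many transactions near CDP can lose.

```python
# Toy model (hypothetical, not FalconStor's API) of the full-CDP vs.
# near-CDP trade-off: the snapshot interval is the worst-case window
# of writes that a failure can wipe out.

def recoverable(transactions, snapshot_interval, failure_time):
    """Return the transactions recoverable after a failure at
    `failure_time`, given snapshots taken every `snapshot_interval`
    time units. An interval of 0 models full CDP, where every write
    is captured continuously and nothing is lost."""
    if snapshot_interval == 0:          # full CDP: replay everything
        return [t for t in transactions if t[0] <= failure_time]
    # Near CDP: only data up to the most recent snapshot survives.
    last_snap = (failure_time // snapshot_interval) * snapshot_interval
    return [t for t in transactions if t[0] <= last_snap]

# Hypothetical million-dollar orders, timestamped in minutes.
orders = [(5, "order-A"), (25, "order-B"), (55, "order-C")]

print(recoverable(orders, 0, 59))   # full CDP: all three orders survive
print(recoverable(orders, 30, 59))  # 30-min snapshots: order-C (t=55) is lost
```

With a 30-minute interval, a failure at minute 59 rolls back to the minute-30 snapshot, losing the order written at minute 55; full CDP keeps it, at the cost of the longer recovery times Eicher mentions.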
Eicher also spoke about the benefits of server virtualization beyond simple consolidation, and how the technology can aid in backup and recovery. If you're running 10 virtual machines on one physical machine, you can run into CPU, memory and I/O capacity issues at backup time. FalconStor's approach to CDP lets users back up at the disk level, not the host level, so the impact is greatly reduced. And, from a recovery standpoint, you can have one VM standing in for 100 physical servers, and each can recover boot images from the CDP device. No longer is data recovery a one-to-one deal, Eicher noted.
CDP, he said, also helps organizations get rid of tape at remote offices, where the person in charge of changing tapes is usually not an IT worker. Tapes often jam or get lost in shipment back to headquarters, and when that person goes on vacation, no backup gets done at all. Using CDP, the data is kept on the box and replicated back to the data center, where it can then be transferred to tape storage.
At the conference, Eicher said he heard a unique use of CDP – one company was doing CDP for virus scanning. "Live scanning slows down the e-mail server a lot," he said. "By taking a snapshot of the e-mail server and running the virus scan against it, there's no impact to the live server. If a virus is found in one mailbox, you go right to it, without having to scan every mailbox. I thought that was a pretty interesting application of CDP."
-- David Rubinstein
Friday, June 20, 2008
Meep! Meep! IBM's Roadrunner Most Powerful Supercomputer
The TOP500 list of the world's most powerful supercomputers was released at the International Supercomputing Conference this week. And IBM hogged the top slots. The company claimed first place. And second. And third.
IBM's "Roadrunner" supercomputer won the title of the world's most powerful supercomputer. The Roadrunner, which is installed at the U.S. Department of Energy's Los Alamos National Laboratory, achieved a peak performance of 1.026 petaFLOPS, running past IBM's BlueGene L and P systems to claim first place.
Roadrunner is a hybrid system that combines Cell Broadband Engine processors with AMD's Opteron dual-core processors, making it one of the most energy-efficient systems on the list.
The former holder of the title, Blue Gene/L at DOE's Lawrence Livermore National Laboratory, came in second this year with a performance of 478.2 teraFLOPS. IBM also grabbed third place with the Blue Gene/P system at the Department of Energy's Argonne National Laboratory near Chicago.
Also at the top of the list were Sun's SunBlade x6420 "Ranger" system at the University of Texas, and the Cray XT4 "Jaguar" system at Oak Ridge National Laboratory in Tennessee.
While IBM claimed the top slots, Intel continued to dominate the list, with Intel processors now found in 75 percent of the TOP500 supercomputers, up from 70.8 percent on the 30th list released last year.
-- Michelle Savage
Thursday, June 19, 2008
Mozilla: Firefox Downloads Surpass 8 Million
Mozilla claimed a new download record for the release of Firefox 3.0 yesterday. It said that the newest version of the Firefox Web browser was downloaded more than 8 million times in the first 24 hours it was available.
Firefox devotees united in an attempt to set a world record for most software downloads in a single day. The category is new, and not yet certified by Guinness World Records, but it is expected to be approved this week.
The Tuesday release was delayed more than an hour as eager users checking for the new release overloaded Firefox's Web servers. To further complicate things, the site was slow or unreachable for about two hours just before the scheduled release time. Fortunately, the servers recovered and users were able to download nearly on schedule.
And download they did! During peak periods, servers were accommodating more than 9,000 downloads per minute. Within 24 hours, Firefox 3.0 was downloaded 8.3 million times, beating Mozilla’s prediction of 5 million downloads.
So what’s the big deal with this release? It includes enhancements to help users organize their favorite Web sites and block access to sites known to distribute malicious software. It also allows Yahoo Mail users to send e-mail from Firefox 3 by clicking a "mailto" link they might come across when clicking on a name or a "contact us" link on a Web page. Before, these links could only open a standalone desktop e-mail program. Firefox 3 also offers new design and speed improvements.
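For readers curious what a "mailto" link actually carries, here's a minimal sketch using Python's standard library to pull one apart (the link itself is made up for illustration; Firefox's own handler is native code, not Python):

```python
# Dissect a hypothetical "mailto" link: the scheme, the recipient,
# and any percent-encoded extras such as a subject line.
from urllib.parse import urlsplit, parse_qs

link = "mailto:press@example.com?subject=Contact%20us"
parts = urlsplit(link)

print(parts.scheme)                      # mailto
print(parts.path)                        # press@example.com
print(parse_qs(parts.query)["subject"])  # ['Contact us']
```

A browser that registers itself (or a webmail service) as the handler for the `mailto` scheme receives exactly this information and opens a compose window pre-filled with the recipient and subject, which is what Firefox 3 now lets Web-based mail do.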
-- Michelle Savage
Wednesday, June 18, 2008
Noise on the Game Networks
As a video game enthusiast who landed a PlayStation 3 last Christmas, it’s been great to finally play games on the PlayStation Network. No longer do I have to stick with my PC for all my online gaming. It’s great to play a few rounds of Call of Duty 4 or Grand Theft Auto IV instead of being forced to rotate between DOTA and Day of Defeat.
Playing on the PSN is also my third major exposure to in-game voice chat, but the first time facing the notorious, oft-reported world of profane people (often children) heckling and cursing you out when playing them.
This is not news at all to anyone who’s ever played online, but I find it a hilarious phenomenon anyway. Before PSN, it was rare for me to encounter a chatter who would explode or otherwise disrupt the in-game voice chats by spamming noise so that nobody else could be heard. Usually, if anyone got out of hand, an admin could just step in and mute their Vent/Steam voice chat, and that would be that.
The servers I played on, which tended to be large and well organized, could be counted on to police that kind of behavior effectively. As such, the worst I ever encountered was someone playing their Casio keyboard into their mic, which brought back fond memories of my youth and my own keyboard. I wish I could remember that fellow’s name…
Anyway, the PSN is quite different. There are no admins and there are no organized servers; it’s just you and whoever else is out there randomly thrown together. I haven’t encountered too many voice chatters in GTAIV yet, but CoD4 provided a lot of material.
It’s probably not rocket science to figure out that the reason this kind of behavior is pervasive is because of anonymity. When you’re a 24-year-old playing in your own home, who is really going to discipline you for cracking racist jokes while waiting for a game to start? Who is really going to care, for that matter? Gamers have gone past the point where hearing a 10-year-old fling every curse in the book at you is anything special. It’s part of the landscape, and I think many of us find it fun.
So, if some kid in Tekonsha, Mich. wants to throw every slur up on the wall in Madden or NCAA Football, I say fire away, son.
-- Adam LoBelia
Coffee break-ing news
While the Internet has made journalism a lot easier--thanks to e-mail, information repositories and endless streams of PDF-formatted research reports--it's also made writing about something unique more difficult. Take, for example, my desire to write a new blog posting today on something I found on the BugTraq mailing list. When Craig Wright, manager for risk advisory services at BDO Kendalls Pty. Ltd., sent out a message to the ubiquitous BugTraq yesterday, stating that he could hack his coffee maker, I was naturally intrigued.
The run-down is as follows: The Jura Impressa F90 is a super high-end coffee machine that offers an optional Internet connection kit. Wright, naturally, threw some attacks at the thing and discovered that it ran Windows XP. He also discovered that he could take over the OS with remote attacks. What can you do with a hacked coffee machine? Well, you can make it spit out more water than the cup will hold, making a black puddle nearby. Or, you can spin the dials on all the coffee maker settings so that it essentially crashes when trying to make a cup of joe.
Oh, and there's no way to patch the thing to prevent these vulnerabilities.
Naturally, this is the sort of exciting story we here at Systems Management News would love to report on, just for giggles. It would even be worth getting ahold of Mr. Wright for an interview.
Unfortunately, because this is the Internet, the story has already been posted on Slashdot, Digg, Boingboing, and a host of other sites around the Web. Therefore, I felt that it would be relatively pointless for me to even mention the thing here.
Of course, I just did. It's hard not to get all reportery when people go plugging their kitchen appliances into the Internet. Up until now, the only Internet-connected appliances I've ever seen were a refrigerator at Microsoft's headquarters (a strange and out-of-place steel affair sitting in a visitor center, alone in the waiting area), and the NetBSD project's seminal toaster. Anyone who's been to a conference where NetBSD had a booth has seen this thing: It's a red multi-slice toaster with an LED screen pasted onto the side. The fact that this contraption actually ran NetBSD really made no difference to the toaster: it still toasted in the normal fashion. But the fundamental point of that kitchen appliance was to prove that NetBSD can, in fact, run on just about anything.
So, now that we've cleared all this up, I'm off to make some good old-fashioned tea by putting water inside of a metal pot and placing it on top of an open flame. And while I may still have to worry about finding original stories to report in this competitive news industry, at least I won't have to worry about someone hitting up my beverage with a buffer overflow.
-- Alex Handy
Yes, you should defrag your solid state drives
Two of the hottest trends in IT are solid-state drives and virtualization. Both have resulted in an accidental boon to Diskeeper, which just about owns the market for defragmentation utilities. In fact, the company is advising top SSD manufacturers about fragmentation, according to VP for public affairs Derek De Vette. Administrators are unsure what to do, posting queries on technology Web sites about defragging their SSDs. Interestingly, many experts are advising against SSD defragging, saying the concepts of contiguous placement and large-block storage are rendered moot by the new drives. Yet De Vette said fragmentation does occur, and that the performance hit from fragmentation is significant enough that the hyped performance advantage of SSDs over mechanical disk drives hasn't yet been realized. As for virtualization, people understand that the hard drive can fragment, and so can the virtualized environment. But De Vette said most administrators are only beginning to realize that fragmentation can occur at the mapping level between the two layers. And he cautioned that too much fragmentation in a virtualized environment, just like in a physical one, can effectively shut it down.
-- David Rubinstein
Live, from HP Technology Forum
HP is making a few product announcements at its Technology Forum and Expo in Las Vegas this week, including change management and blade server technologies. But HP partners also have some news—here are the latest updates:
Ascert Provides Test Plug-in for Quality Center
Ascert today launched VersaTest Automation Plug-in for Quality Center, to provide a bridge between VersaTest Automator and HP's Quality Center that automatically creates central management, visibility, and a repository of tests and test results.
According to Rob Walker, managing partner of Ascert, VersaTest Automation Plug-in enables automation and expands the reach of Quality Center into parts of the enterprise that could not otherwise be accessed. Using the plug-in, Quality Center users can define and execute VersaTest server-level interface tests within Quality Center and validate the pass or fail results automatically.
Walker acknowledged that not all Quality Center users are willing to learn yet another product. “So, we designed the plug-in to allow those users to execute VersaTest Automator tests and store test results from within the Quality Center software,” he said, adding that users do not have to acquire new skill sets to use it.
The VersaTest Automation Plug-in for Quality Center will run on Windows, Solaris and Linux servers.
HP User Groups “Connect”
Three large HP-focused user groups announced today that they have merged to provide a unified service to the 50,000 global users managing and maintaining old and new HP products and technologies.
By joining forces today, the former Encompass, HP-Interex EMEA and ITUG communities expect to expand their influence and power, while remaining independent of HP. The new group, called Connect, enables users to share knowledge and contacts while acting as a consumer advocate to HP.
The group plans to use Web 2.0 and social networking technologies to encourage community among its members and to attract a new generation of IT professionals, said Scott Healy, chairman of ITUG and vice president of industry solutions at Golden Gate Software.
-- Michelle Savage
Monday, June 16, 2008
High on Hyper-V
I went to an instructor-led lab at Microsoft Tech-Ed IT Professionals on Friday, where I was guided through the new capabilities in Windows Server 2008 that will enable Hyper-V virtualization. Since there is typically a difference in the user experience of a person who writes about technology (me) and a person who works with it every day (everyone else in the lab), I stopped a few attendees on the way out. Overall, the feedback was positive. Here are their comments:
“It’s so much better than their previous releases. It’s finally getting there. It’s good to see.”
“We all thought Microsoft was going to put out a cheap but crappy product and blow a lot of smoke about why we need to switch from VMware. But it (Hyper-V) actually looks pretty good.”
“I like it! It’s perfect for small businesses—it has a dummy-proof wizard that makes it easy to set up and manage VMs. Overall, it’s better than I expected.”
“There are pluses and minuses. Hyper-V comes with a good console. But they say you can’t turn off the drivers, which could be a problem.”
“Ack… I don’t know… I still don’t know.”
-- Michelle Savage
Friday, June 13, 2008
Quotes Flying? Better Duck!
Today, while working on a story about open source software in university IT systems, I had the distinct pleasure of speaking with a remarkably smart admin, whose name I can't use here. He's quoted in an upcoming story, but I can't call him by name in this piece because of some rather silly policies at his organization.
This fellow has his Unix down. He's a smooth operator with a vast knowledge of systems and software. But his statements are closely monitored by the university publicity department. They've obviously got everyone on campus trained well, because the admin told me we'd have to get approval for the story from these folks before we can run it.
He assured me that these were reasonable people, who wouldn't want to quibble with any details in the story; they'd just want to ensure they were covered from a liability standpoint. To illustrate this point, the admin told me that if I referred to his IT team by the college mascot name, something I was able to Duck in my article, then Systems Management News and I could be open to a trademark lawsuit from the NCAA PAC 10 Conference. That mascot is, after all, owned by the college and the conference.
I'm sure this was all a misunderstanding. I'm sure the university of this unnamed state, one of the many, many states in our nation that begins with the letter “O,” has no plans to sue us. I'm sure the fear was that we'd have a massive pull quote on the front page featuring the animal, cartoon character, and worst of all, the name of the college mascot. Or that we'd show a bump in single-issue sales for using a specific college logo on the cover. Or, heaven forbid, that our readers would learn that such bright, articulate people were associated with that university.
Unfortunately, such is life in this litigious society. And perhaps some universities are just too sensitive about becoming known as the place where Animal House was filmed.
-- Alex Handy
Wednesday, June 11, 2008
Microsoft Wants to Change Desktop Virtualization
Server virtualization is a hot topic at this year’s Tech-Ed IT Professionals conference, but Microsoft is bullish on the importance of application virtualization technology. In his keynote, Bob Muglia, senior vice president of Microsoft's Server and Tools Business unit, highlighted an untapped opportunity “to take and separate applications from the underlying operating system image, and allow those applications to be delivered much more effectively without going through a complex installation process.” He said we’ll see these technologies over the next few years.
To show off how far it has come in the desktop virtualization space, Microsoft demonstrated how it has integrated technology from Kidaro, a company it recently acquired, to develop its "Microsoft Enterprise Desktop Virtualization" product. This solution gives IT administrators the ability to "manage and deploy virtual PCs out to their end users' desktops," as per Jameel Khalfan, a product manager for Windows. Got an application that is incompatible with Vista? Kidaro lets it run in a virtual machine. The technology also lets users control copy-and-paste between a virtual machine (VM) and the host system. Users can also redirect URLs to a VM.
According to Khalfan, the Microsoft Enterprise Desktop Virtualization application will be included in the Desktop Optimization Pack when that product is released next year. The general opinion among conference-goers is that if Microsoft can deliver on this promise, it'll be a hero in the desktop virtualization market.
-- Michelle Savage
RTI Has Google to Thank
Real Time Innovations Inc. (RTI), spun off from a Stanford University robotics research group, has been providing real-time middleware to the aerospace and defense industries for about a dozen years. Its systems are used to coordinate communications for transportation, intelligence and simulations, so that information picked up by a radar system, for instance, can be fed into a larger data pool where it can then be analyzed, prioritized and responded to in real time.
So, what was RTI, a company with deep roots in the industrial embedded systems market, doing at the SIFMA financial markets conference and expo that began today in New York City? “It’s about real-time, low-latency messaging,” RTI vice president David Barnett told me over breakfast. He said RTI is partnering with a consulting company called Zivlyn Systems LLC to develop a trading platform designed specifically to handle higher volumes and replace legacy trading systems. RTI created a “data cloud” configuration that allows applications to subscribe to it, and RTI figures out the message switching and routing, as well as providing caching, filtering and other services. (The company also announced extended support for the .NET Framework and languages, to create a single infrastructure to support high-performance trading in heterogeneous environments.)
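The "data cloud" configuration described above is essentially topic-based publish-subscribe: applications subscribe to named topics, and the middleware handles fan-out, caching for late joiners, and content filtering. Here is a minimal toy sketch of that pattern in Python; the class and method names are illustrative assumptions, not RTI's actual API.

```python
from collections import defaultdict

class DataCloud:
    """Toy topic-based pub/sub bus: fan-out, caching, and filtering."""

    def __init__(self):
        self._subscribers = defaultdict(list)
        self._cache = {}  # last sample per topic, replayed to late joiners

    def subscribe(self, topic, callback, predicate=None):
        # Register a callback; an optional predicate content-filters samples.
        self._subscribers[topic].append((callback, predicate))
        if topic in self._cache:
            callback(self._cache[topic])  # replay cached sample

    def publish(self, topic, sample):
        # Cache the sample, then fan it out to every matching subscriber.
        self._cache[topic] = sample
        for callback, predicate in self._subscribers[topic]:
            if predicate is None or predicate(sample):
                callback(sample)

cloud = DataCloud()
ticks = []
# Subscribe only to trades of 100 shares or more (content filtering).
cloud.subscribe("trades/MSFT", ticks.append, predicate=lambda t: t["qty"] >= 100)
cloud.publish("trades/MSFT", {"price": 27.10, "qty": 500})
cloud.publish("trades/MSFT", {"price": 27.11, "qty": 10})  # filtered out
print(ticks)  # only the 500-share trade arrives
```

The caching step is what lets a subscriber that joins late still receive the most recent value, one of the services Barnett describes RTI providing.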
But how does a company servicing the military-industrial complex get a foot in the door in the financial markets? “They came to us, actually,” Barnett said. “Google is how they found us. We talk about low-latency and real time and high throughput, and those were the keywords they found us with. It turns out these are real problems in this market.”
Now, RTI is on the financial world's radar.
-- David Rubinstein
Tuesday, June 10, 2008
It's All in the Delivery
Vaclav Vincalek is puzzled. The founder of a startup software delivery provider called Boonbox recalls the days of ASP -- application service providers -- and how widely they were rejected by enterprises that scoffed at the notion of keeping their prized data and applications anywhere but behind locked and guarded doors. Now, just a few short years later, he can't believe how totally companies trust outside organizations with their data. Granted, security standards have come a long way -- or have they? Remember Hannaford? Hackers stole credit card data from the supermarket chain's systems, which were certified PCI DSS compliant.
Boonbox is an offshoot of Pacific Coast Information Systems Ltd. (PCIS), an IT consultancy founded in 1995 to help businesses use the correct software to solve business problems. So Vincalek remembers the pushback to this method of delivery. "When e-mail was new and organizations wanted to install e-mail systems, we offered to host them, but they wanted the server in their server room. The mentality that e-mail would be moved out of the office was unheard of." So the shift to more offsite hosting leaves Vincalek scratching his head, and taking shots at Google, where the application hosting platform is being built out, a la salesforce.com. "Google is the biggest threat to our privacy right now," Vincalek said. "They keep everything to themselves and don't tell you what they're doing with it."
-- David Rubinstein
Monday, June 9, 2008
Facebook's Now an Open Book
Facebook has open-sourced major areas of the Facebook Platform. Why? Because developers asked them to.
In a recent announcement, the social networking company said that this is "just a first step" in a major release. Now developers or any third party can download source code, which includes "most of the code that runs Facebook Platform plus implementations of many of the most-used methods and tags."
Most of the open-source code is being made available via the Common Public Attribution License (CPAL), while the FBML parser is governed by the Mozilla Public License (MPL).
While allowing the developer community to play with and improve the code base of Facebook Platform is probably the biggest benefit for going open source, competing social Web sites can now access the code to support their own third-party application deployment.
Word in the Valley is that Facebook’s move is a reaction to OpenSocial, an open source platform that is supported by Google, MySpace and Yahoo. OpenSocial threatens Facebook's platform, as it has the potential to make it easier for social networking sites to match Facebook's catalog of third-party applications.
-- Michelle Savage
Friday, June 6, 2008
GoogleTown -- Coming Soon to Mountain View
Internet giant Google is leasing land in Mountain View from NASA’s Ames Research Center to build a new research and development campus. But "high-tech campus" doesn’t quite describe what Google plans to build — it’s more like a mixed-use development.
The campus will contain 1.2 million square feet of office and research and development facilities on 42.2 acres in the research park. Here, Google will work on high-tech research projects, such as large-scale data management, massively distributed computing and human-to-computer interfaces.
But here’s where Google raises the bar. The company will also build "high-quality, affordable" housing on campus, in an attempt to attract top talent. It will also build restaurants, fitness facilities, a child care center, a basketball court, and conference and parking facilities for employees, while providing NASA with recreation and parking facilities and infrastructure improvements. There may even be room for retail shops in the future.
The lease is for 40 years but could be extended for up to 90 years. And it didn’t come cheap — Google agreed to pay $146 million over the lifetime of the lease.
-- Michelle Savage
Tomcat Vulnerable to HTML-Based Attack
The Apache Foundation's Tomcat Java application server is vulnerable to an HTML-based attack. The vulnerability, disclosed Wednesday and updated yesterday, allows remote attackers to inject HTML code into the hostname field of the host manager screen. The resulting code injection could be used to gather up administration cookies, allowing an attacker to take over the system if the operator has enabled cookie-based authentication.
Tomcat versions 5.5.9 through 5.5.26 and 6.0.0 through 6.0.16 are affected by this vulnerability. Tomcat does not sanitize input in the hostname field, and thus allows this injection. As of today, Apache has not released a patch for this vulnerability.
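By way of illustration (this is a sketch of the general defense, not Apache's actual fix), a hostname field can be restricted to the characters RFC 1123 allows, so that markup like `<script>` is rejected outright before it ever reaches a management page:

```python
import re

# Hostnames may only contain letters, digits, dots and hyphens; labels
# are 1-63 chars and may not start or end with a hyphen. Anything
# carrying HTML metacharacters fails this check automatically.
HOSTNAME_RE = re.compile(
    r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)"          # first label
    r"(\.(?!-)[A-Za-z0-9-]{1,63}(?<!-))*$"      # optional further labels
)

def is_valid_hostname(value: str) -> bool:
    """Return True only if value looks like a plain DNS hostname."""
    return len(value) <= 253 and bool(HOSTNAME_RE.match(value))

print(is_valid_hostname("app01.example.com"))                 # True
print(is_valid_hostname("<script>document.cookie</script>"))  # False
```

Whitelisting valid input like this is generally safer than trying to blacklist dangerous tags, since there is no HTML left to escape or filter.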
-- Alex Handy
Oh, the drama at Yahoo!
I’m not sure why any of us bother to watch staged reality TV shows when the real drama is unfolding before our very eyes in the form of Yahoo and Microsoft letters.
In the latest open letter to Yahoo Chairman Roy Bostock, billionaire investor Carl Icahn on Wednesday used the words "deceitful," "self-destructive," "misleading" and "insulting to shareholders" to express his frustration with what he sees as the "inordinate lengths" the company has gone to in keeping Microsoft from buying Yahoo.
Icahn's letter was sparked by details disclosed earlier this week in a lawsuit filed by Yahoo shareholders who disagree with the way Yahoo handled Microsoft’s recent $44.6 billion acquisition offer. Icahn wrote that Yang and other board members used unnecessary tactics, including a costly severance plan for Yahoo employees, to “entrench their positions and keep shareholders from deciding if they wished to sell to Microsoft," citing details from the shareholder suit.
He said that merging with Microsoft is the "only way to salvage" the company. "It is insulting to shareholders that Yahoo for the last month has told us that they are quite willing to negotiate a sale of the company to Microsoft and cannot understand why Microsoft has walked away," Icahn wrote. "However, the board conveniently neglected to inform shareholders about the magnitude of the plan it installed which made it practically impossible for Microsoft to stay at the bargaining table."
The company’s next shareholder meeting is Aug. 1, and Icahn has said he'll try to oust CEO Jerry Yang and others if they don’t change their ways. "It may be too late to convince Microsoft to trust Yang and the current board to run the company during that period while Microsoft sits on the sidelines with $45 billion at risk. Therefore, the best chance to bring Microsoft and Yahoo together is to replace Yang and the current Yahoo board with a board that will negotiate in good faith with Microsoft," he added.
Yahoo resisted the attack, saying in its reply letter that Icahn's criticism "seriously misrepresents and manipulates the facts."
Icahn may want to get rid of Yang, but it will be hard to find a cheaper CEO: Yahoo's proxy filing lists Jerry Yang's 2007 salary as $1, with no other compensation reported. Of course, he owns 3.9 percent of the company, but that would be his regardless.
-- Michelle Savage
Wednesday, June 4, 2008
Chinese Hackers Going Crazy Everywhere
Metasploit was hacked! Metasploit got hax0r3d! OMG, FYI, beware!
But wait, it’s not as bad as all that. The apotheosis of cool hacker tools was indeed attacked, but as it turns out, the Chinese hackers responsible never actually got into Metasploit’s servers.
According to HDM (H.D. Moore, project lead on your arch nemesis: Metasploit, the application exploitation payload framework), reports of Metasploit’s Web site being hacked were greatly exaggerated. In fact, they were just a testament to the old adage: “There is no such thing as 100 percent security.”
“They can’t pwn the real server, so they pwn one next to it,” wrote HD on IRC. “Then use that to 'man-in-the-middle’ the http responses and inject their own code.”
Essentially, the Chinese hackers who wanted to own Metasploit (and Chinese hackers have been quite active everywhere recently; no, it's not just your logs seeing this) had to compromise a neighboring machine at the ISP where Metasploit's Web site is hosted. When requests came in, the server nearest it on the switch played stand-in. Not that it mattered: the actual Metasploit code is hosted elsewhere, and the MD5s wouldn't match up.
After all the kerfuffle over Chinese hackers I’ve heard over the last week and a half, I have to wonder if some of the resident rebels in China aren’t being forced into such nefarious hack attacks by government policies. I’m not saying that the Chinese government is encouraging hacking of foreign systems, but the country’s internal filtering and censoring policies could be forcing rebellious teens into hacking by default.
Since blogging about the mistakes of China’s policies is essentially illegal, it’s likely that the computer-literate—who in the U.S. write oodles of blogs and protest in the streets—have given up on effecting political change, and have instead spent their lives learning how to mess with other people’s data.
Everyone knows about the Great Firewall of China, the filter that keeps dissident content out of Chinese computers. Unfortunately, nothing seems to be filtering what's coming out of China: an ever-increasing flow of nasty packets aimed at bringing foreign servers to their knees.
And something about the month of May was particularly exciting for the Chinese hacker world. I’ve heard from a number of sources that their systems were under particularly high volumes of attacks as the month went on. Maybe this is just how China gets ready for the Olympics.
-- Alex Handy
Tuesday, June 3, 2008
Houston, We Have a Fire
The explosion and fire in Planet.com’s Houston, Texas, data center facility over this past weekend served as a scorching reminder of the importance of a strong backup plan.
Houston Fire Department officials said that the Planet.com Internet Services data center, where the company creates, hosts and maintains Web sites for its clients, was rocked by an explosion in a network gear room. Apparently no servers or networking equipment were damaged, and no one was hurt, but power was cut to the facility, affecting about 9,000 servers. The blast was strong enough to push three walls of the facility out of place; the incident was attributed to an electrical problem with a transformer.
With an estimated 7,500 clients hoping for a quick resolution, Planet.com is putting its recovery plan into action. Some servers will rely on generator power for a week until normal utility connections are restored, according to Douglas Erwin, Planet.com's CEO.
One of the interesting things that Planet.com employees are doing is providing updates on the data center’s progress through an online forum. This is seen as an important part of the disaster recovery plan. The latest updates say that the company is doing a rack-by-rack check for any servers that require technical support. Erwin said that 6,000 of the 9,000 servers have been restored, and the next step is to rebuild the electrical room, which will take place in the next week or so.
This fire certainly serves as a reminder that a solid backup plan is critically important. Of course it is impossible to prepare for everything, but certain steps can be taken to mitigate problems. Planet.com, for instance, added a backup server with continuous data protection in March. Turns out it wasn't such a bad idea.
-- Jeff Feinman
Monday, June 2, 2008
Sun Patches Solaris
Sun Microsystems patched a number of vulnerabilities in Solaris 8, 9 and 10 over the past few days. Three stack-based buffer overflows in the Samba 3.0 code in Solaris 9 and 10 were patched; these could have allowed a remote user to inject code through Samba requests across a network. The patch for these vulnerabilities is available now, and a second patch can also be obtained online. Additionally, Sun issued patches for Solaris 8, 9 and 10 that fix a hole in Crontab: malicious users could potentially escalate their privileges on a system by creating race conditions in the Crontab utility. A fix is available.
--Alex Handy
Friday, May 30, 2008
Juniper Switches Tracks
Having only come onto this network equipment beat in March, I was surprised to learn of Juniper's move into the switch market. Yesterday morning, I headed down to Juniper's executive business center, where Bobby Guhasarkar, senior manager of product marketing for the Ethernet platforms group, gave me a rundown on the company's new switches.
Brand new switches. It turns out that Juniper had never made switches. I was unaware of this; I'd always assumed the company had such equipment, since it specialized in network infrastructure and control systems like routers and firewalls.
But not switches. Until this year. Juniper began shipping its first switches in March. The EX 3200 and the EX 4200 are stand-alone units that run JUNOS. They were designed from scratch and offer much of the functionality that’s available in Juniper’s routers.
Guhasarkar said that the design goal was, from the start, to build a cheaper layer 3 switch with low latency. Because these switches run Juniper's existing network operating system, they can be administered through Juniper's existing management software, and they can do things like traffic shaping, mirroring and monitoring.
“Traffic shaping is something JUNOS has had for a long long time, and the beauty of what we get in this first release of these switches is ninth-generation software. It’s the same software that’s been running on all the routers. It’s the same traffic shaping code, the same SNMP code, the same chassis management code,” said Guhasarkar.
Another interesting feature of these new switches is the 4200's ability to band together with 10 of its cohorts to form a virtual chassis, which can be administered as a single entity instead of a stack of individual switches.
With all these new-fangled features, I had to ask Guhasarkar where the line is between a switch and a router.
“Well, it doesn’t make it a router in the Internet sense. We can’t accept a million routes from the Internet,” said Guhasarkar. The main difference between Juniper's routers and switches is in the logical scale tables: “How many MAC addresses does it have to keep track of? At Juniper, when we say router, we kind of think about them as very large, serious boxes. You can’t get in a US$4,000 switch what you can get in a million-dollar router.”
-- Alex Handy
Thursday, May 29, 2008
Four Things Businesses Suck At
Design. Costs. Operations. Risk.
Most businesses suck at these things.
At least that's what Chris Crosby told me during a recent visit to the SMN offices here in beautiful Huntington, Long Island.
Crosby is senior vice president at Digital Realty Trust, a company that provides data center facilities “between the do-it-yourself guys and the full providers.”
At the design stage, Crosby noted that companies leave out business decision-makers. That's unfortunate, he said, because the designers want to create elegant solutions that might not factor in cost and, worse, might not meet the company's needs. And, of course, there's always the question of “Will it work?” Meanwhile, business people are left to determine the risk of a project without really grasping the complexities involved, or being able to see that what they perceive as complex might actually be relatively simple to implement.
“What makes IT think it can build a data center?” Crosby wondered. “Look, planes are important to FedEx, but they don't build 'em. We try to boil it down to a business decision” for prospective customers.
As for risk and cost, Crosby debunked recent studies that advocate moving data centers to remote parts of the United States, so as to not be terror targets and to take advantage of lower utility costs. He noted: “When you have to fly a guy out there to do anything, you lose that cost benefit pretty quick.”
Not to mention that before 9/11, one of the biggest terror attacks in our nation's history happened in Oklahoma City.
-- David Rubinstein
Wednesday, May 28, 2008
Microsoft Lets You Look AND Touch
Microsoft executives last night demonstrated the user interface for Windows 7, which will bring touch-screen technology to PCs.
At the Wall Street Journal’s annual D6 conference, Microsoft chairman Bill Gates and chief executive Steve Ballmer said that “multi-touch” will give customers the ability to use their fingers to control their screens, as opposed to existing mouse-, keyboard- and pen-based controls. The capability is similar to what Apple offers in the iPhone and iPod touch.
Microsoft’s multitouch technology was originally developed for Microsoft's tabletop Surface device, which is used by hotels and casinos. According to Gates, the technology represents the beginning of computing based on a new generation of input systems, such as "speech, gesture, vision, ink."
A key feature of the technology allows for multiple touches simultaneously; for instance, dragging five fingers across a screen would draw five separate lines. The executives said this technology is perfect for editing digital photos and navigating Internet-mapping services.
Ballmer said Windows 7 will arrive in late 2009.
A video demonstration of Windows 7's multi-touch capabilities is available for viewing on the Microsoft blog.
-- Michelle Savage
Friday, May 23, 2008
Egg-cited for Ballmer in Hungary
During a speech at a Hungarian university this week, Microsoft chief executive officer Steve Ballmer was pelted with eggs by a protestor, who accused Microsoft of stealing money from the Hungarian people.
It should have been a nice, friendly affair. The speech Ballmer gave was titled "You Can Change The World" and his audience was a group of business and technology students at Budapest's Corvinus University. Things were moving along smoothly until a man wearing glasses and a shirt that read “Microsoft Corruption” stood up and began hurling eggs at Ballmer.
As Ballmer ducked behind a podium, the man left peacefully, escorted by a university official. To his credit, Ballmer handled the situation well. He smiled and joked: "It was a friendly disruption." He later said that his first thought was that he had to keep his suit clean, as eggs don’t wash off easily.
This was not the first food attack on a Microsoft executive. In 1999, protesters pelted Bill Gates with custard pies.
-- Michelle Savage
Thursday, May 22, 2008
A Virtual Job Bonanza
In today’s gloomy economy, IT is one of many flat job markets. But, surprisingly, one area seems unaffected: virtualization. According to IT job board Dice, virtualization is the fastest-growing segment of the IT job market.
Dice announced this month that it has seen a 40 percent increase over the past six months in job listings that require VMware experience. In a Dice poll of IT professionals, 40 percent of respondents said they had "virtualized a significant number of servers and services."
Dice said that few job listings currently call for Hyper-V knowledge, but the company is watching closely to see whether demand grows once Microsoft releases the product.
Tom Silver, Dice’s senior vice president of marketing and customer support, expects an even greater jump in virtualization jobs. He cites a McKinsey study, which said that data centers are expected to surpass the airline industry as a greenhouse gas polluter by 2020. According to Silver, "the need for a greener approach will help drive virtualization."
Silver’s forecast is backed by statistics from research firm IDC, which projects that the virtualization market will grow to $23.5 billion in 2011, a compound annual growth rate of 27.1 percent from 2006.
-- Michelle Savage
Stand By Your Debian
The story so far: the Debian distribution of Linux has been having OpenSSL troubles, as it came to light that the distribution had been shipping poor random number generation since 2006. When the news became public three weeks ago, it turned out that both Debian and Ubuntu were generating SSL certificates from a random number space of only around 32,000 possibilities.
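Why only around 32,000? The broken patch reportedly left the process ID as effectively the only entropy feeding the PRNG, and a default Linux box caps PIDs at 32,768 values. A back-of-the-envelope Python sketch (emphatically not the actual OpenSSL code; `weak_key` is a stand-in) shows why that keyspace is fatal:

```python
import hashlib

def weak_key(pid: int) -> str:
    # Stand-in for "derive a key from a PRNG seeded only by the PID".
    # Deterministic in pid, just like the broken generator was.
    return hashlib.md5(str(pid).encode()).hexdigest()

# With PIDs limited to 0..32767, the entire universe of keys a victim
# machine could ever have generated fits in one small set...
universe = {weak_key(pid) for pid in range(32768)}
print(len(universe))  # 32768 candidate keys

# ...so an attacker can precompute all of them once and test each
# against any harvested certificate or SSH host key at leisure.
```

That is the whole attack: brute force is trivial when the haystack has 32,768 straws instead of 2^128 or more.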
As a result, anyone handing out certs from a Debian or Ubuntu system has spent the past month regenerating and redistributing their entire library of OpenSSL encryption keys. I asked Mark Shuttleworth, founder of Canonical and Ubuntu Linux, whether Debian had lost its credibility through this affair. He sent me the following e-mail reply, which I've reprinted in its entirety:
"It was certainly a very serious security issue, and I understand where your concerns are coming from, but for the record I am still confident that the Debian approach of self-motivated and largely self-selected specialist maintainers results in the best overall quality of packages in something the scale of Ubuntu or Debian. We have no plans to shift to a different model for Ubuntu than collaborating closely with Debian. Of course, we think that Canonical and Ubuntu's process, as an additional layer, does add something, but we consider Debian to be a superb, diligent and effective community with which to collaborate.
If you look at the sequence of events, the Debian maintainer actually took the patch to the designated upstream mailing list, where he got a response from an upstream developer suggesting that the patch was fine. He followed what most folks would consider to be reasonable best practice, and in the final analysis we can't attribute the result to anything other than a very unfortunate combination of errors. The process was not intrinsically broken.
Ubuntu maintainers didn't fix the issue a week before Debian -- we worked with the Debian maintainers and uploaded fixed packages simultaneously in both places. Some process issues on the Debian side held up the fixes there for a little while, but in principle the work was done jointly. As always, Debian maintainers contribute a great and unique depth of expertise.
We are still conducting a review of this failure, so we will probably make some changes. Among other things, we expect to contract or otherwise engage external consultants for regular reviews of security-critical packages in both Debian and Ubuntu. We can help both Debian and Ubuntu achieve an even higher level of security awareness and protection, and we don't see it as something on which we would compete with Debian so much as collaborate. Ubuntu's security track record until this event has been exceptional, and while the Ubuntu team does a tremendous amount of work that is specific to Ubuntu, we also benefit greatly from our collaboration with the huge community of Debian maintainers."
-- Alex Handy
Wednesday, May 21, 2008
Twitterpated with Ruby? Not so fast!
Where's my Twitter? Tangled up in a messy back-end of clogged threads and bad clustering solutions. I'm still digging into the dirty undercarriage of the Twitter fiasco, but the initial clues point to Ruby on Rails. Turns out, this darling of Web design isn't exactly a speed demon. (Editor's note: SMN contributing writer Lisa Morgan, who's also SVP and principal analyst at Online Market World, has coined the term "Twitter Flitter" to describe the sometimes-on, sometimes-off phenomenon).
When Twitter burst onto the scene at SXSW in 2007, the microblogging service was hailed as the latest hot startup, something with a truly unique technology for collaboration, social connection and mobile friend-tracking. All with a simple Web app that restricted microblog entries to just a handful of words.
Nothing could have been cooler or more superfluous at the same time. The name itself hints at a fluttering of attention; a hyper-active lack of focus, not unlike the symptoms of ADD. And, as it turns out, the infrastructure behind Twitter may have been chosen in just such a moment of hedonistic bohemianism.
Ruby on Rails was brand new in 2006, and it was a true mash-up. Niche programming language from wacky Japanese guy meets pissed-off Web developer, sick of all that was Perl, Java and ASP. The language-and-framework was called the second coming of Java, by some, and those who wrote in it laughed as they ended their method names with exclamation points or question marks. Ruby wasn't just easy, it was fun!
Too bad it doesn't scale. Twitter is now faced with only two options: work with the RoR big-wigs to change how everything works in the framework without breaking compatibility, or abandon the entire apparatus and start from scratch in Java. Or C#. Maybe even Perl or PHP. Could Python handle the load? Perhaps.
When millions of users come hammering at your door to use your services, life is just easier when you're standing on top of Apache, IBM, Microsoft or Sun. So, here's hoping that the mess at Twitter gets cleaned up. It's a fun service.
And, it's evident the Twitterpated have known about this Ruby problem for some time. Perhaps some sort of JRuby hack can be created. I'd bet that some good old-fashioned JDBC would help alleviate some of the database bottlenecking, and Tomcat can work wonders for a clogged pipeline. I'm sure this is all more complicated, and that the good folks at Twitter are still in their offices in South Park (yes, really, that's where they are) working to fix things. And I bet they'll solve this problem. It is from this sort of adversity that battle-worn, profitable startups are born.
-- Alex Handy
Tuesday, May 20, 2008
Microsoft Can Have You Seeing Stars
If you’ve always wanted to be an astronomer but aren’t quite ready to quit your day job, Microsoft has just the solution for you.
Computer users now play astronomer, thanks to WorldWide Telescope by Microsoft. The free, virtual service combines images and databases from every major telescope and astronomical organization in the world, allowing users to take a virtual tour of the night sky.
The WorldWide Telescope stitches together terabytes of high-resolution images of celestial bodies from a variety of sources, including the Hubble Space Telescope, the Chandra X-ray Observatory, and the Spitzer Space Telescope. It then displays them in a way that replicates their actual position in the sky.
Through a video game-like experience, users can freely browse through the solar system, galaxy and beyond, or take guided tours of the sky hosted by astronomers and educators at major universities and planetariums.
WorldWide Telescope will surely be compared with Google Sky, but the ability to build a custom multimedia planetarium show sets it apart from Google’s tool. Users will actually be able to use the Microsoft program to create their own space tours, and share them with their friends. Hmmm….do I sense a space race here?
Microsoft said it is offering the resource for free in memory of Jim Gray, the Microsoft researcher who disappeared last year while sailing to the Farallon Islands, off the coast of San Francisco. The project is an extension of Gray's work, which included the development of large-scale, high-performance online databases.
A test version of the software is available for download.
-- Michelle Savage
How Do You Know Who's Who?
Today's edition of the Long Island daily newspaper Newsday carried the details of a tragic story, in which a police officer – who had pulled a suspected drunk driver over to the side of the road – was critically injured when his car was plowed into by another suspected drunk driver – who didn't even have a valid New York State driver's license.
The man who hit the police car, 27-year-old Rahiem Griffin of Shirley, N.Y., was driving with a suspended New York State license, but had a license issued from the state of New Jersey -- which, coincidentally, was suspended in March of this year for violations relating to an unpaid parking ticket. New York authorities say Griffin "beat the system" by obtaining that license, which they say never should have been issued because New York and New Jersey have a reciprocal arrangement: If your license is suspended in one state, you won't get a license in the other.
So, how did Griffin get his New Jersey license? Police and motor vehicle officials say when Griffin applied for the New Jersey license, he simply dropped a middle initial. A search of state motor vehicle records did not find any problems with Rahiem Griffin – no middle initial -- and so the license was issued. New Jersey officials also use the National Driver Register, a federal database of drivers with suspended and/or revoked licenses, and again, no record of Rahiem Griffin was found.
In this day and age, it's baffling to me how our databases still are not fully integrated, and able to understand – or even infer, or flag – that Rahiem A. Griffin and Rahiem Griffin might be the same person.
Earlier this year, I interviewed Stef Damianakis, CEO of a company called Netrics, which has created a data-matching engine that models the human concept of similarity. Thus, when Thomas Smith is entered into a database in New York, and Tom B. Smythe is entered into a database in New Jersey, the engine will return them together, enabling further scrutiny by the person who made the query.
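The matching idea itself is easy to sketch. Here's a minimal, hypothetical stand-in for such an engine (Netrics' real product is surely far more sophisticated), built on nothing more than Python's standard difflib:

```python
# A toy fuzzy-matching sketch. difflib's similarity ratio stands in for
# the "human concept of similarity" a real matching engine would model.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Score two names between 0.0 and 1.0, ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def find_matches(query: str, records: list[str], threshold: float = 0.8) -> list[str]:
    """Return records similar enough to the query to warrant human review."""
    return [r for r in records if similarity(query, r) >= threshold]

records = ["Thomas Smith", "Tom B. Smythe", "Rahiem A. Griffin", "Jane Doe"]
print(find_matches("Rahiem Griffin", records))  # flags "Rahiem A. Griffin"
```

An exact-string lookup, which is effectively what the motor vehicle search did, scores "Rahiem Griffin" against "Rahiem A. Griffin" as a complete miss; even this crude ratio-based check flags the pair for a human to look at.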
The issue is not simply about drunk driving. False, or inexact, identifications result in guns being sold to people with criminal histories, and in foreign criminals getting passports to perform acts of global terror, to mention but two frightening scenarios.
Research and advances in the way information is stored, recalled, sorted and logically connected should be among the highest priorities for governments around the world, if they're sincere in their efforts to protect and defend their citizens.
-- David Rubinstein
Thursday, May 15, 2008
Suppressing Complexity in Service-Oriented Architectures
Suppressing complexity in service-oriented architectures was the topic of the SOA Governance Summit, held by Software AG in New York City on May 14.
Software AG executives discussed three main things to remember when dealing with SOA: focus on the organization’s capabilities, decouple providers from consumers, and have end-to-end visibility.
Organizations should think of SOA environments in terms of their own capabilities, and not product categories, Software AG executives said. They also pointed out four capabilities organizations should have: service enablement, which allows the creation of new services from existing applications, service orchestration, service mediation, which helps consumers and providers find each other, and service management.
“If you only take one thing from this conference, it should be decoupling providers from consumers,” Jignesh Shah, Software AG’s senior director of SOA product management, told attendees in the Marriott Marquis in Times Square.
Shah said that providers should be decoupled from consumers from the get-go. This is important to do because quality of service, implementation technologies and functional requirements will change over time, and it is important to have the ability to evolve. Decoupling the parts will provide a “shield” against potential problems when those changes occur.
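In code terms, the decoupling Shah describes amounts to putting a mediation layer between the two sides. Here's a minimal sketch with hypothetical names (ServiceRegistry, a "quote" contract), not Software AG's actual API: consumers look up a service by its contract, so the provider behind that contract can be replaced without touching any consumer.

```python
# A toy mediation layer: consumers and providers only know the registry,
# never each other, so either side can evolve independently.
from typing import Callable, Dict

class ServiceRegistry:
    def __init__(self) -> None:
        self._providers: Dict[str, Callable[..., object]] = {}

    def publish(self, contract: str, provider: Callable[..., object]) -> None:
        """A provider registers (or replaces) itself under a contract name."""
        self._providers[contract] = provider

    def call(self, contract: str, *args: object) -> object:
        """A consumer invokes a service by contract, not by implementation."""
        return self._providers[contract](*args)

registry = ServiceRegistry()

# The original provider, wrapping some legacy application.
registry.publish("quote", lambda sym: {"symbol": sym, "source": "legacy-app"})
print(registry.call("quote", "HPQ"))

# Swap in a new implementation; the consumer's call site does not change.
registry.publish("quote", lambda sym: {"symbol": sym, "source": "new-esb"})
print(registry.call("quote", "HPQ"))
```

The "shield" Shah mentions is exactly this indirection: when implementation technologies or quality-of-service requirements change, only the registration changes, not the consumers.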
Miko Matsumura, vice president and deputy CTO of Software AG, spoke about a new SOA paradigm that deals with the “management of constraint.” He said IT people are sensitive to capacity and there are new ways of sharing capacity.
These ways include the service concept, with a single service being used for multiple use cases; a single business service can support two or more different processes. The other way, he said, is virtualization, where multiple virtual machines can share a single physical box.
“From the perspective of people in the infrastructure management business, I think the thing to appreciate is that we’re moving into an era of non-linear utilization,” Matsumura said. “One of the properties of SOA that creates this is the whole reuse and dependency model, which is this notion that a single service gets reused by other services. If user A increases their use of the service, and then you have user B increasing their use of the same service, then you start adding users, the curve of utilization is no longer linear.”
--Jeff Feinman
Tuesday, May 13, 2008
HP Is Buying EDS, aka "HP Global Services," for US$13.9 Billion
Big Blue's biggest weapon has long been its services arm. As the saying goes, when you buy enterprise "solutions" from IBM, the bulk of the sale is the van full of services folks with packed suitcases, ready to move into your office for good. It’s a simple, yet Faustian, bargain: You give them all your money, and they take care of everything. Forever.
IBM Global Services differentiates IBM from, say, Microsoft, which sells its software through the channel, leaving the lucrative services business for partners. In a few cases, as with its Avanade joint venture with Accenture, Microsoft does capture some services revenue, but otherwise, Microsoft doesn't play in that world.
Hewlett-Packard is another Big Blue competitor that just doesn't measure up when it comes to services and service revenue. Eight years ago, HP almost bought PricewaterhouseCoopers, but the deal fell through. (IBM snapped up PwC a couple of years later.)
Today, HP is ready to try again. When the word first slipped out on Monday that HP was negotiating to buy Electronic Data Systems, EDS’ shares soared 28% on the news, while HP’s fell by 5%. That tells you what Wall Street thinks of this.
I share Wall Street’s skepticism. This doesn't seem like a good deal. But then again, I'm skeptical about HP's ability to take full advantage of large acquisitions. HP never achieved the value it could have from the Compaq fiasco. The jury is still out as to whether HP and Mercury Interactive are better off as a single company. Certainly, Mercury's competitors remain delighted about that acquisition — and are profiting by the amount of business they picked up because of it.
With this deal, much comes down to execution. It’s clear that HP chief honcho Mark Hurd is better at execution than his predecessor, Carly Fiorina. Let’s see if he can pull this one off.
— Alan Zeichick
Friday, May 9, 2008
At Sun, It's Chips Ahoy!
Sun wasn't too chatty about its latest acquisition. When the company snapped up the remaining IP from fabless chip startup Montalvo Systems in late April, nary a press release was issued, nor a blog written. But despite Sun's large-scale acquisition of MySQL earlier this year, there's a very significant chance that the Montalvo purchase will mean more for Sun in the long run than its acquisition of the database leader.
Montalvo was a complete debacle, from start to finish. The company rose out of the ashes of famous money-sink Transmeta, a company so flush with cash, for a time, that it counted Linus Torvalds among its employees. Transmeta was legendary for its low-power chips with radically different energy management ideas. But Transmeta was plagued from the start by the ridiculous capital expenditures needed to launch a new consumer processor. After releasing a few chips to market, which made their way into some unique, compact and expensive laptops, Transmeta wandered off into the sunset with an Intel lawsuit in tow.
While Transmeta isn't entirely gone, its hopes and dreams are, basically, dashed to pieces at this point. Thus, a portion of the company's management left a few years back to form Montalvo Systems. Montalvo hoped to build low-power x86 chips, and to do so in India with rented time on ultra-violet laser etchers. The idea was to remain fabless, and as such, the company only needed a relatively small amount of capital to float. Or, that was the theory.
But after three years and over US$70 million spent, Montalvo was a failure. Enter Sun, in April, with what is said to be a pocket full of change. For a song, Sun snapped up Montalvo's IP, and ostensibly, some of its brainy processor architects.
So when I met with a room full of Sun and Intel spokespeople on Thursday, there to discuss their happy, huggy relationship, my first question was naturally related to Montalvo. Was Sun going to be producing its own low-power x86 chips?
The official word was, “what?” Sun's representative was not aware of the acquisition. Intel's multiple spokespeople, however, were. The resounding reply from them was, no comment.
Of course, there should have been a third party in the room, as well: AMD. It's understandable that representatives from that company would frown upon sharing a room with Intel, but the question would have been no less relevant there.
Is Sun preparing to move into the x86 chip market? I'm going to speculate here, something I'm not really supposed to do, as a journalist. But, heck, none of the parties involved want to contemplate the possible answer, so I'm the only one in the room who can. Is Sun hoping for low-power x86 chips?
Unequivocally: yes. But I'll add a caveat here; Sun's interest in the desktop is non-existent. For the server market, there's also no use for this chip. Where this low-power x86 chip could be most useful, however, is in cell phones and mobile devices. Sun just can't help crowing about its mobile aspirations of late, a fact which is obvious when you see all the JavaFX stuff they've been showing off at JavaOne this year.
Mix this with the company's last IP fire-sale acquisition, SavaJe, which created a Java operating system for mobile phones, and you've got a recipe for a full-scale mobile phone platform, designed, produced and programmed entirely by Sun's big-brained engineering teams.
Perhaps the real question I should have asked is, “Can Sun actually deliver such a product to the consumer marketplace?” My initial reaction to this question is... No. Probably not.
--Alex Handy
Thursday, May 8, 2008
Peter Gabriel's Been Shut Down
Pop singer Peter Gabriel's Web site has been shut down since Monday after an undisclosed number of servers were stolen from his hosting provider’s data center.
Gabriel’s site doesn’t give a whole lot of information about what exactly happened. It simply reads: "We'll Be Back Soon - apologies for the lack of service. Real World, Peter Gabriel and WOMAD web services are currently off-line. Our servers were stolen from our ISP's data centre on Sunday night - Monday morning. We are working on restoring normal service as soon as possible." (There's no evidence of a sledgehammer being used during the break-in).
However, a little sleuthing reveals that the victimized data center appears to belong to Rednet Ltd, a bankrupt subsidiary of Opal Telecom, a Web hosting service provider.
Without the server housing his data, who knows how long Gabriel will remain without a Web site? But the real question is “how did these thieves get away with robbing a data center?”
I’ve had the opportunity to visit a few data centers. Each had such high security, I’d take my chances robbing Fort Knox before attempting to steal something from one of these data centers. Data centers are typically housed at confidential, undisclosed locations that are protected with armed personnel around the clock. In case that’s not enough, entry protection tools, such as biometric devices and secure token cards, are used to control and audit access. Did Opal Telecom forget to invest in these things?
Unless these thieves were invisible, it looks like Gabriel needs a better data center.
-- Michelle Savage
Wednesday, May 7, 2008
Debianized OpenSolaris Arrives: Don't listen to Ian Murdock, Debianization is a good thing
Many years ago, I was looking at the various types of Linux and wondering which version to install. A friend of mine threw a Debian install CD at me and said something to the effect of “There is no other Linux.” After installing and setting up my desktop, I, too, was convinced. And while, today, Debian's many benefits are available in their own forms from other operating systems, there's still a lot that sets it apart from the crowd.
Unfortunately, one of the largest things that sets Debian apart is its contingent of developers, some of whom could be graciously categorized as the “Fat and Sassy” variety. Certainly, there are numerous “Fat and Sassy” types in the Linux world, and not all of them are Debian lovers. But it always seems to be Debian at the end of whatever the latest elitist argument is around the operating system.
Take, for example, the issues patched in OpenSSH 5.0. Just days after the 4.9 release, which fixed a large number of bugs and added some new features, the developers behind the project had to rush out and build version 5.0. The reason, they claimed, was that someone had found a way to hijack X11 tunneled sessions, but only submitted the bug to the Debian team. And the Debian team didn't pass this bug over until after OpenSSH hit 4.9.
Now, in the world of exploit reporting, there is always a large amount of fear, uncertainty and doubt. And I find it highly unlikely that someone found a potential attack vector on OpenSSH, and then only reported it to Debian. As we all know, finding an exploitable bug in OpenSSH is basically a ticket to a six-figure salary at any of a hundred security consulting firms.
And yet, I can't help but think that the poor OpenSSH team was right in blaming Debian. It's a very insular community, and I've even heard the occasional gripe from within the Debian lists about Ubuntu, which has arguably become Debian's saving grace in recent years.
Anyway, this is a long-winded way of getting to news that Sun's Project Indiana is now complete. Ian Murdock started the project when he joined Sun early last year, and from day one, I knew it was an effort to Debianize Solaris. Murdock would disagree, and likely argue that the efforts in Project Indiana are focused on making OpenSolaris more accessible to Linux users, and more performant when it comes time to upgrade. But when you get right down to it, Debianization is just what Solaris has needed. Debian is, at the same time, the most geek-friendly and the simplest to use Linux. Others, like Gentoo and Ubuntu, have come along and improved upon the Debian model, but when you get right down to it, almost all of the modern packaging systems in Linux are an attempt to copy apt-get.
And now, OpenSolaris has its own apt-get. The image packaging system is in some ways more advanced than apt-get, though not as mature, but it's apt-get nonetheless. Now, OpenSolaris users can type a simple command to install all of the components, binaries and libraries they need to run a given piece of software. The endless chase for dependencies is no more. And with that easy-to-use yet highly complex change, OpenSolaris has turned the corner from niche Unix to viable Linux alternative.
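That "endless chase for dependencies" that apt-get and OpenSolaris's packaging system eliminate boils down to computing the transitive closure of a package's dependency graph, then installing in an order where every dependency precedes its dependent. Here's a minimal sketch of the idea in Python; the package names and dependency graph are invented for illustration, not taken from any real repository:

```python
# A made-up dependency graph: each package maps to the packages it needs.
DEPS = {
    "webserver": ["ssl-lib", "logging-lib"],
    "ssl-lib": ["crypto-lib"],
    "logging-lib": [],
    "crypto-lib": [],
}

def resolve(pkg, installed=None):
    """Return an install order in which every dependency precedes its dependent."""
    if installed is None:
        installed = []
    # Recurse into dependencies first (depth-first), so they land earlier.
    for dep in DEPS.get(pkg, []):
        resolve(dep, installed)
    if pkg not in installed:
        installed.append(pkg)
    return installed

print(resolve("webserver"))
# → ['crypto-lib', 'ssl-lib', 'logging-lib', 'webserver']
```

A real package manager layers version constraints, conflict handling and download logic on top of this, but the core convenience — ask for one thing, get everything it needs — is just this traversal.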
Congratulations, Ian. I know you'll be upset to see me compare the two operating systems, but you just need to remember: The people behind a project can sometimes become harder to deal with than the code. But that doesn't mean the ideas behind the code, or even the people, are bad. In fact, it's the best reason there is to run off and start from scratch.
--Alex Handy
Thursday, May 1, 2008
Photosynth Takes a Star Turn
Microsoft's Photosynth software was a star in last night's episode of "CSI: NY," a popular TV crime series. The technology creates a collection of two-dimensional images in a three-dimensional environment, allowing people to quickly zoom around to view different details.
In the episode, the CSI detectives investigate the murder of a high school guidance counselor, who is found during a school dance with his face melted off. Through the science of Photosynth, the team finds that the killer is a student, who turns out to be a thirty-something rather than a teenager.
So how did Photosynth save the day? The software allowed the detectives to stitch together images (taken on cell phones by students at the dance) and create a three-dimensional map of the high school gym, to re-create the scene of the crime.
As the product is not yet publicly available, last night was the first opportunity for most people to see how the software works. And apparently Microsoft didn’t have to pay a dime for this coverage.
Microsoft reps said that the company did not pay to have Photosynth featured on CSI. But few would dispute that Microsoft will benefit greatly from this product placement on one of the most popular shows on television today.
During the show, one detective marvels at the clarity of the images on Photosynth. Several other characters refer to the product by name in different scenes. And, last but not least, one of the detectives, near the end of the show, provides one of the greatest Microsoft plugs of all time when he announces: "It's Microsoft's world, kid. I'm just living in it." We could be looking at a whole new world of product placement here.
-- Michelle Savage
Let the Sun Shine In
Sunshine on my face. Fresh air in my lungs. Finally, I’ve finished with my meetings at Interop here in Las Vegas, and I get a chance – albeit a short one – to sit outside for a few minutes and enjoy the weather.
Las Vegas, as you know, is designed to keep you inside – in the hotel restaurants, bars and shows, but mostly, in the casino. Since I arrived here on Monday, I’ve been inside, inhaling more second-hand smoke than I'd expect to breathe at a Catskills Mah Jongg tournament. (Three bam. Hack. Wheeze. Soap.) With the exception of the cab ride I took to get from the Interop show to Microsoft’s Management Summit in the Venetian Hotel, I was strictly behind closed doors.
The Venetian understands this longing to be outside. On the man-made piazza inside the hotel, restaurants offer dining inside or “outside.” If you choose outside, you’re in the middle of this piazza, with very high ceilings painted like the sky at sunset, and the lighting provides the feeling of day coming to a close. But even that gives an uneasy feeling after a while. You’re waiting for the sun to go all the way down, but it doesn’t. It’s perpetual pre-dusk.
Interop was a huge event: some 350 exhibitors on the show floor, many interesting sessions – and one or two I attended that were less engaging. A session on data center standardization that I expected to get specific about SNMP, 802.11n and other protocols was instead a vague talk about one company’s effort to get its multiple data centers on the same page in terms of tools, job responsibilities and management.
People were talking about virtualization, security, networking, storage, appliances and devices, and telecommunications. After three full days of meetings and sessions, my head is spinning. But good news is at hand. The pool waitress has arrived, with frozen cocktails to help with the wind-down. Cheers!
-- David Rubinstein
Wednesday, April 30, 2008
Examining the Industry's Scat
Like a biologist picking through owl pellets, the best way to figure out what’s going on with an entity is to examine its leavings. That’s why I try to go to the ACCRC at least once a week. The Alameda County Computer Resource Center is a non-profit computer and electronics recycler in Berkeley, Calif. There, they reformat old desktops, install Ubuntu and then donate the machine to someone in need. Of course, it doesn’t hurt that my wife works there, too.
My weekly visits to this den of IT leavings have taught me a few things about the current state of enterprise IT. First, I’ve learned that cathode ray tube monitors are finally becoming the exception, rather than the rule. For a long time, these beasts made up the mainstay of donations, but recently they’ve become somewhat less common. Old CRT televisions and CCTV monitors, however, are still plentiful in the wild.
Laptops have become so prevalent that they are frequently appearing without any molestation. In the past, laptops donated for recycling have typically been stripped of RAM, hard drives and often their screens. But these days, most laptops arrive fully intact, a sign that the precious pieces inside are no longer at a premium.
DLT tapes are also a common sight at the center. It would appear that they have become somewhat passé in enterprise backup systems. There is irony here, however: despite the wealth of backup media donated, actual hard drive and RAID case donations are way down. It would appear that the need for cold backups isn’t as great as the need for online systems with warmer storage. All data, accessible all the time, in other words.
Perhaps the greatest lesson I have learned from my visits to the ACCRC, however, is the lesson of waste. It never ceases to amaze me how much equipment arrives in its original packaging, unopened, unused and unneeded. If I were able to track these items back to their origins (something that is next to impossible at this massive warehouse), I bet I’d find that someone who was either fired, a newbie, or looking for another job was behind these donations. I like to think that the experienced and caring managers and buyers out there tend to purchase only what they’ll need, whereas the less experienced tend to buy in round numbers with huge smudge factors. Honestly, there’s never going to be a need for 10,000 individually wrapped 3-foot Ethernet cables in any enterprise. There will always be a need for 30,000-foot spools of cable, however, which can easily be cut to any length needed.
Unless you want to feed the ACCRC more fresh gear, I’d recommend calculating and re-calculating your buying numbers when purchasing equipment. Remember, like a salad bar, you can always go back and fill your cup again when you’ve finished what you’ve got. But you can never put the cottage cheese back in the tub if you don’t finish it.
--Alex Handy