Oracle issued a comprehensive list of its software that may or may not be impacted by Heartbleed, the vulnerability in the OpenSSL cryptographic library, while warning that no fixes are yet available for some likely affected products.
The list includes well over 100 products that appear to be in the clear, either because they never used the version of OpenSSL reported to be vulnerable to Heartbleed, or because they don’t use OpenSSL at all.
However, Oracle is still investigating whether another roughly 20 products, including MySQL Connector/C++, Oracle SOA Suite and Nimbula Director, are vulnerable.
Oracle determined that seven products are vulnerable and is offering fixes. These include Communications Operation Monitor, MySQL Enterprise Monitor, MySQL Enterprise Server 5.6, Oracle Communications Session Monitor, Oracle Linux 6, Oracle Mobile Security Suite and some Solaris 11.2 implementations.
Another 14 products are likely to be vulnerable, but Oracle doesn’t have fixes for them yet, according to the post. These include BlueKai, Java ME and MySQL Workbench.
Users of Oracle’s growing family of cloud services may also be able to breathe easy. “It appears that both externally and internally (private) accessible applications hosted in Oracle Cloud Data Centers are currently not at risk from this vulnerability,” although Oracle continues to investigate, according to the post.
Heartbleed, which was revealed by researchers last week, can allow attackers who exploit it to steal information from systems thought to be protected by OpenSSL encryption. A fix for the vulnerable version of OpenSSL has been released, and vendors and IT organizations are scrambling to patch their products and systems.
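The affected window is narrow and precisely known: OpenSSL 1.0.1 through 1.0.1f carry the bug (tracked as CVE-2014-0160), while 1.0.1g has the fix and the older 0.9.8 and 1.0.0 branches were never vulnerable. As a rough triage sketch (the helper name below is ours, for illustration), a version string as reported by `openssl version` can be classified like this:

```python
import re

def heartbleed_status(version: str) -> str:
    """Classify an OpenSSL version string against the Heartbleed range.

    OpenSSL 1.0.1 through 1.0.1f are affected (CVE-2014-0160);
    1.0.1g carries the fix, and the 0.9.8/1.0.0 branches were never
    vulnerable. A version check is only a first pass: distributions
    sometimes backport fixes without bumping the version string.
    """
    if re.fullmatch(r"1\.0\.1[a-f]?", version):
        return "potentially vulnerable"
    return "not in the known-vulnerable range"

print(heartbleed_status("1.0.1f"))  # potentially vulnerable
print(heartbleed_status("1.0.1g"))  # not in the known-vulnerable range
```

In practice the string would come from the local `openssl version` output or a vendor advisory; the point is simply that the vulnerable range is small and well documented, which is why vendors could publish definitive affected-product lists.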
Observers consider Heartbleed one of the most serious Internet security vulnerabilities in recent times.
Meanwhile, this week Oracle also shipped 104 patches as part of its regular quarterly release.
The patch batch includes security fixes for Oracle database 11g and 12c, Fusion Middleware 11g and 12c, Fusion Applications, WebLogic Server and dozens of other products. Some 37 patches target Java SE alone.
A detailed rundown of the vulnerabilities’ relative severity has been posted to an official Oracle blog.
“I think you’ll see wide-area, high-bandwidth [smart]watches this year at some point,” said Glenn Lurie, president of emerging devices at AT&T, in an interview.
The company has a group working in Austin, Texas, on thousands of wearable-device prototypes, and is also looking at certifying third-party devices for use on its network, Lurie said.
“A majority of stuff you’re going to see today that’s truly wearable is going to be in a watch form factor to start,” Lurie said. If smartwatch use takes off — “and we believe it can,” Lurie said — then those devices could become hubs for wearable computing.
Right now smartwatches lack LTE capabilities, so they are largely reliant on smartphones for apps and notifications. With a mobile broadband connection, a smartwatch becomes an “independent device,” Lurie said.
“We’ve been very, very clear in our opinion that a wearable needs to be a stand-alone device,” Lurie said.
AT&T and Filip Technologies in January released the Filip child-tracker wristwatch, which also allows a parent to call a child over AT&T’s network. The Filip could be improved, but those are the kinds of wearable products that AT&T wants to bring to market.
Wearables for home health care are also candidates for LTE connections, Lurie said, but fitness trackers may be too small for LTE connectivity, at least for now.
Lurie couldn’t say when smartglasses would be certified to work on AT&T’s network. Google last year said adding cellular capabilities to its Glass eyewear wasn’t in the plans because of battery use. But AT&T is willing to experiment with devices to see where LTE would fit.
“It’s one thing if I’m buying it to go out for a jog, it’s another thing if I’m going to wear it every day. Those are the things people are debating right now — how that’s all going to come out,” Lurie said. “There’s technology and there’s innovation happening, and those things will get solved.”
Lurie said battery issues are being resolved and that there are no network capacity issues. Wearable devices don’t use much bandwidth, since they relay short bursts of information, unless someone is, for instance, listening to Pandora radio on a smartwatch, he said.
But AT&T is building out network capacity, adding Wi-Fi networks, and virtualizing networks to accommodate more devices.
“We don’t have network issues, we don’t have any capacity issues,” Lurie said. “The key element to adding these devices is a majority of [them] aren’t high-bandwidth devices.”
AT&T wants to make wearables work with its home offerings like the Digital Life home automation and security system. AT&T is also working with car makers for LTE integration, with wearables interacting with vehicles to open doors and start ignitions.
Canonical has announced its latest milestone server release, Ubuntu 14.04 LTS.
The company, which is better known for its open source Ubuntu Linux desktop operating system, has supplied a server flavor of Ubuntu since 2006; it is used by companies including Netflix and Snapchat.
Canonical claims Ubuntu 14.04 Long Term Support (LTS) is the most interoperable OpenStack implementation, designed to run across multiple environments using Icehouse, the latest iteration of OpenStack.
Canonical product manager Mark Baker told The INQUIRER, “The days of denying Ubuntu are over, and the cloud is where we can make a difference.”
Although Canonical regularly issues incremental releases of Ubuntu, LTS releases such as this one represent landmarks for the operating system and come only every two years. LTS releases are also supported for a full five years.
New in this Ubuntu 14.04 LTS release are the Juju and MAAS orchestration and automation tools, along with support for hyperscale 64-bit ARM computing such as the server setup recently announced by AMD.
Baker continued, “We’re not an enterprise vendor in the traditional sense. We’ve got a pretty good idea of how to do it by now. Openstack is gaining a more formal status as enterprise evolves to adopt cloud based solutions, and we are making a commitment to support it.
“OpenStack Icehouse is also considered LTS and as such will be supported for five years.”
Scalability is another key factor. Baker said, “We look at performance. For the majority of our customers it’s about efficiency – how rapidly we can scale up and scale in, and that’s something Ubuntu does incredibly well.”
Ubuntu 14.04 LTS will be available to download from Thursday.
Explosive volcanic eruptions apparently shaped Mercury’s surface for billions of years — a surprising finding, given that until recently scientists had thought the phenomenon was impossible on the sun-scorched planet.
This discovery could shed new light on the origins of Mercury, investigators added.
On Earth, explosive volcanic eruptions can lead to catastrophic damage, such as when Mount St. Helens detonated in 1980 in the deadliest and most economically destructive volcanic event in U.S. history.
Explosive volcanism happens because Earth’s interior is rich in volatiles — water, carbon dioxide and other compounds that vaporize at relatively low temperatures. As molten rock rises from the depths toward Earth’s surface, volatiles dissolved within it vaporize and expand, increasing pressure so much that the crust above can burst like an overinflated balloon.
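The scale of that expansion is easy to illustrate with back-of-the-envelope numbers. Treating water vapor as an ideal gas at a typical magmatic temperature (the figures below are assumed round values for illustration, not taken from the study), a kilogram of dissolved water that vaporizes near the surface occupies thousands of times the volume of the liquid:

```python
# Back-of-the-envelope sketch: how much does 1 kg of water expand when
# it vaporizes in rising magma? All values are assumed round numbers.
R = 8.314            # J/(mol*K), universal gas constant
n = 1000 / 18.02     # moles of H2O in 1 kg of water
T = 1500             # K, a typical magmatic temperature (assumed)
P = 101_325          # Pa, roughly surface pressure (assumed)

v_gas = n * R * T / P    # ideal gas law, PV = nRT, so V = nRT/P (m^3)
v_liquid = 0.001         # 1 kg of liquid water occupies about 0.001 m^3

expansion = v_gas / v_liquid
print(f"~{expansion:,.0f}-fold expansion")  # several thousand-fold
```

At depth, high confining pressure keeps the volatiles dissolved and compressed; it is precisely as the magma rises and the overlying pressure drops that this expansion kicks in and the pressure on the crust builds.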
Mercury was long thought to be bone-dry when it came to volatiles. As such, researchers thought explosive volcanism could not happen there.
However, in 2008, after the initial flyby of Mercury by NASA’s MESSENGER spacecraft (short for MErcury Surface, Space ENvironment, GEochemistry, and Ranging), researchers found unusually bright reflective material dotting the planet’s surface.
This stuff appears to be pyroclastic ash, which is a sign of volcanic explosions. The large number of these deposits suggested that Mercury’s interior was not always devoid of volatiles, as scientists had long assumed.
It was unclear from MESSENGER’s first flybys over what time periods those explosions had occurred. Now scientists find Mercury’s volatiles did not escape in a rash of explosions early in the planet’s history. Instead, explosive volcanism apparently lasted for billions of years on Mercury.
Investigators analyzed 51 pyroclastic sites across Mercury’s surface using data from MESSENGER collected after the spacecraft began orbiting around the innermost planet in the solar system in 2011. These orbital readings provided a far more detailed view of the deposits and the vents that spewed them out compared with data from the initial flybys.
The orbital data revealed that some of the vents were much more eroded than others, showing that the explosions did not all happen at the same time.
If the explosions did happen over a brief period and then stopped, “you’d expect all the vents to be degraded by approximately the same amount,” study lead author Timothy Goudge, a planetary scientist at Brown University, said in a statement. “We don’t see that; we see different degradation states. So the eruptions appear to have been taking place over an appreciable period of Mercury’s history.”
The researchers noted that about 90 percent of these ash deposits are located within craters formed by meteorite impacts. These deposits must have accumulated after each crater formed; if a deposit were laid down before a crater formed, it would have been destroyed by the impact that formed the crater.
Scientists can estimate the age of an impact crater by looking at how eroded its rims and walls are. Using that method, Goudge and his colleagues found pyroclastic deposits in craters ranging in age from about 1 billion to more than 4 billion years old. Explosive volcanic activity was thus not confined to a brief period after Mercury’s formation about 4.5 billion years ago, researchers said.
“The most surprising discovery was the range of ages over which these deposits appear to have formed, as this really has implications for how long Mercury retained volatiles in its interior,” Goudge told Space.com.
Earlier models of how Mercury formed suggested most of its volatiles would not have survived the planet-formation process. For instance, since Mercury has an unusually large iron core, past models posited that the planet might have once been much larger, but had its outer layers and its volatiles removed by a huge impact early in the planet’s history.
This scenario now seems unlikely given these new findings, in combination with other data collected by MESSENGER showing traces of the volatiles sulfur, potassium, and sodium on Mercury’s surface.
Future research will aim to identify more of these pyroclastic deposits and their source vents.
“More detailed observations and studies of single vents and associated deposits will elucidate some of the detailed aspects of what pyroclastic activity might have been like on Mercury,” Goudge said.
The scientists detailed their findings online March 28 in the Journal of Geophysical Research: Planets.
It’s becoming more obvious lately that Intel and Microsoft are no longer joined at the hip. Intel is trying desperately to make a dent in the tablet market, and with Windows struggling on those devices, Android is where it’s at.
Intel hopes to see its processors used in 40 million tablets this year, and 80% to 90% of those will be running Google’s Android OS, CEO Brian Krzanich said on Tuesday.
“Our mix of OSes reflects pretty much what you see in the marketplace,” Krzanich said during Intel’s quarterly earnings call.
Most Intel-powered tablets running Android today use the older Medfield and Clover Trail+ chips. More Android tablets running the latest Atom processor, called Bay Trail, will ship later this quarter.
That’s not to say Intel is abandoning Windows — far from it. It’s just going where the market is today. Krzanich said he expects Windows to “grow and gain traction,” and more Intel-based tablets running both Android and Windows will be shown in June at the massive Computex trade show in Taipei.
The first Android-based Bay Trail tablet, the DreamTab, was announced in January, but it hasn’t shipped yet.
Intel is chasing ARM, the U.K. company whose processor designs are used in most tablets today, including those running both Android and Apple’s iOS.
The 40 million Intel tablets that will ship this year will give the company 15% to 20% of the tablet market, Intel CFO Stacy Smith said on the earnings call.
Intel is providing discounts and development funds to tablet makers to reduce the cost of using its chips. It’s looking for growth with the white-box Chinese tablet makers, which are expected to ship up to 130 million tablets this year.
Intel chips are available in some tablets now priced under $99, but most will be priced between $125 and $250, Krzanich said.
Microsoft hasn’t made much of a dent yet in Google’s and Apple’s share of the market, but IDC estimated last month that Windows would have 10.2% of the tablet market by 2017. Dell, Toshiba, Lenovo and Hewlett-Packard have launched Windows 8 tablets with Bay Trail, and Microsoft’s own Surface Pro 2 uses an Intel Core processor, but the tablets haven’t sold well.
“All spots in the Explorer Program have been claimed for now, but if you missed it this time, don’t worry,” the Google Glass team wrote on its blog on Wednesday.
“We’ll be trying new ways to expand the Explorer program in the future.”
Google did not respond to a request for more information, but an earlier post about the one-day sale spoke of brisk sales of the $1,500 Internet-enabled headset.
“We’ve sold out of Cotton (white), so things are moving really fast,” the team wrote.
Aside from the white version, Glass was being offered in shades marketed as Charcoal, Tangerine, Shale (grey) and Sky (blue). Buyers had the choice of their favorite shade or frame. Google announced the one-day sale available to all U.S. residents over 18 last week, adding it wasn’t ready to bring the gizmo to other countries. Shoppers who missed it have to sign up for updates at the Glass website.
Only a few thousand early adopters and developers had Glass before the one-day sale, which coincided with a major software update for the heads-up display that put video calling on hold.
An official launch of Google Glass may happen later this year.
The Red Hat Summit kicked off in San Francisco on Tuesday, and continued today with a raft of announcements.
Red Hat launched a new fork of Red Hat Enterprise Linux (RHEL) called “Atomic Host”. The new version is stripped down to enable lightweight deployment of software containers. Although the mainline edition also supports software containers, this lightweight version improves portability.
This is part of a wider Red Hat initiative, Project Atomic, which also sees the Docker container platform updated as part of the ongoing partnership between the two organisations.
Red Hat also announced a release candidate (RC) for Red Hat Enterprise Linux 7. The beta version has already been downloaded 10,000 times. The Atomic Host fork is included in the RC.
Topping all that is the news that Red Hat’s latest stable release, RHEL 6.5, has been deployed at the European Organisation for Nuclear Research – better known as CERN.
The European laboratory, which houses the Large Hadron Collider (LHC) and was the birthplace of the World Wide Web, has rolled out the latest versions of Red Hat Enterprise Linux, Red Hat Enterprise Virtualisation and Red Hat Technical Account Management. Although Red Hat has a long history with CERN, this has been a major rollout for the facility.
The logging server of the LHC is one of the areas covered by the rollout, as are the financial and human resources databases.
The infrastructure comprises a series of virtualised dual-socket Dell PowerEdge M610 servers with up to 256GB of RAM per server and full redundancy to prevent the loss of mission-critical data.
Niko Neufeld, deputy project leader at the Large Hadron Collider, said, “Our LHCb experiment requires a powerful, very reliable and highly available IT environment for controlling and monitoring our 70 million CHF detectors. Red Hat Enterprise Virtualization is at the core of our virtualized infrastructure and complies with our stringent requirements.”
Other news from the conference includes the launch of OpenShift Marketplace, allowing customers to try solutions for cloud applications, and the release of Red Hat JBoss Fuse 6.1 and Red Hat JBoss A-MQ 6.1, standards-based integration and messaging products designed to manage everything from cloud computing to the Internet of Things.
MediaTek has shown off one of its most interesting SoC designs to date at the China Electronic Information Expo. The MT6595 was announced a while ago, but this is apparently the first time MediaTek showcased it in action.
It is a big.LITTLE octa-core chip with integrated LTE support. It pairs four Cortex-A17 cores with four Cortex-A7 cores and can hit 2.2GHz. The GPU of choice is the PowerVR G6200. It supports 4K (“2K4K”) video playback and recording, as well as H.265, and it can deal with a 20-megapixel camera, too.
The really interesting bit is the modem. It can handle TD-LTE, FDD-LTE, WCDMA, TD-SCDMA and GSM networks, hence the company’s claim that it is the first octa-core chip with on-board LTE. Qualcomm has already announced an LTE-enabled octa-core, but it won’t be ready anytime soon. The MT6595 will be – it is expected to show up in actual devices very soon.
Of course, MediaTek is going after a different market. Qualcomm is building the meanest possible chip with four 64-bit Cortex A57 cores and four A53 cores, while MediaTek is keeping the MT6595 somewhat simpler, with smaller 32-bit cores.
“We know you want features that allow you to move as seamlessly as possible between Office Online and the desktop,” wrote Kaberi Chowdhury, an Office Online technical product manager, in a blog post Monday.
Improvements to Excel Online include the ability to insert new comments, edit and delete existing comments, and properly open and edit spreadsheets that contain Visual Basic for Applications (VBA) code.
Meanwhile, Word Online has a new “pane” where users can see all comments in a document, and reply to them or mark them as completed. It also has a refined lists feature that is better able to recognize whether users are continuing a list or starting one. In addition, footnotes and end notes can now be added more conveniently inline.
PowerPoint Online has a revamped text editor that offers a layout view that more closely resembles the look of finished slides, according to Microsoft. It also has improved performance and video functionality, including the ability to play back embedded YouTube videos.
For users of OneNote Online, Microsoft is now adding the ability to print out the notes they’ve created with the application.
Microsoft is also making Word Online, PowerPoint Online and OneNote Online available via Google’s Chrome Web Store so that Chrome browser users can add them to their Chrome App launcher. Excel Online will be added later.
The improvements in Office Online will be rolled out to users this week, starting Monday.
Office Online, which used to be called Office Web Apps, competes directly against Google Docs and other browser-based office productivity suites. It’s meant to offer users a free, lightweight, Web-based version of these four applications if they don’t have the desktop editions on the device they’re using at that moment.
Mark Karpeles, the founder of Mt. Gox, has refused to come to the United States to answer questions about the Japanese bitcoin exchange’s U.S. bankruptcy case, Mt. Gox lawyers told a federal judge on Monday.
In the court filing, Mt. Gox lawyers cited a subpoena from the U.S. Department of Treasury’s Financial Crimes Enforcement Network, which has closely monitored virtual currencies like bitcoin.
“Mr. Karpeles is now in the process of obtaining counsel to represent him with respect to the FinCEN Subpoena. Until such time as counsel is retained and has an opportunity to ‘get up to speed’ and advise Mr. Karpeles, he is not willing to travel to the U.S.,” the filing said.
The subpoena requires Karpeles to appear and provide testimony in Washington, D.C., on Friday.
The court papers also said a Japanese court had been informed of the issue and that a hearing was scheduled on Tuesday in Japan.
Bitcoin is a digital currency that, unlike conventional money, is bought and sold on a peer-to-peer network independent of central control. Its value has soared in the last year, and the total worth of bitcoins minted is now about $7 billion.
Mt. Gox, once the world’s biggest bitcoin exchange, filed for bankruptcy protection in Japan last month, saying it may have lost nearly half a billion dollars’ worth of the virtual coins to hacking of its computer system.
According to Monday’s court filings, the subpoena did not specify topics for discussion.
In the court filings, Karpeles’ lawyers asked the court to delay the bankruptcy deposition to May 5, 2014, but said that Mt. Gox could not guarantee Karpeles would attend then, either.
Double Fine has warned indies of the dangers of devaluing their products, citing its new publishing initiative as a way of protecting against that outcome.
In an interview with USgamer, COO Justin Bailey expressed concern over the harmful side effects of low price points and deep discounting for indie games. By giving away too much for too little, he warned, indie developers could end up in a situation similar to that of the casual market.
“I think what indies really need to watch out for is not becoming the new casual games,” he said. “I don’t think that’s a problem from the development side. Indies are approaching it as an artform and they’re trying to be innovative, but what’s happening in the marketplace is indies are being pushed more and more to have a lower price or have a bunch of games bundled together.”
Double Fine is publishing MagicalTimeBean’s Escape Goat 2, the first occasion it has assisted another developer in that way, and it won’t be the last. According to Bailey, what seems to be a purely business decision on the surface has a strong altruistic undercurrent.
“Double Fine wants to keep indies premium. You see that in our own games and how we’re positioning them. We fight the urge to just completely drop the price. That’s one of the things we want to encourage in this program. Getting people to stick to a premium price point and to the platforms that allow you to do that.”
“We’re not looking to replace… we’re trying to augment the system,” he said. “We’re making small strides right now. Costume Quest 2 is a high-budget game. It’s one where I thought it was best to have a publishing partner who can also spend some marketing funds around it.”
Double Fine is not the first developer to express concern over the tendency among indies to drastically lower prices.
In January, Jason Rohrer published an article imploring developers to consider the loyal fans who buy their games full-price only to see them on sale at a huge discount just a few weeks or months later. Last month, Positech Games’ Cliff Harris went further, suggesting that low price-points actually change the way players see and interact with the games they purchase.
The Intel Education 2-in-1 hybrid has a 10.1-inch screen that can detach from a keyboard base to turn into a tablet. Intel makes reference designs, which are then replicated by device makers and sold to educational institutions.
The 2-in-1 has a quad-core Intel Atom processor Z3740D, which is based on the Bay Trail architecture. The battery lasts about eight hours in tablet mode, and three more hours when docked with the keyboard base, which has a second battery.
Intel did not immediately return requests for comment on the estimated price for the hybrid or when it would become available.
Education is a hotly contested market among computer makers, as Apple pushes its iPads and MacBooks while PC makers like Dell, Hewlett-Packard and Lenovo hawk their Chromebooks.
Some features in the Intel 2-in-1 are drawn from the company’s Education tablets, which also run on Atom processors, but have the Android OS.
The 2-in-1 hybrid has front-facing and rear-facing cameras, and a snap-on magnification lens that allows students to examine items at a microscopic level.
The computer can withstand a drop of 70 centimeters, a feature added to protect it when children mishandle laptops and let them fall. The keyboard base also has a handle.
The screen can be swiveled and placed on the keyboard, giving it the capability of a classic convertible laptop. This feature has been drawn from Intel’s Classmate series of education laptops.
The 2-in-1 has software intended to make learning easier, including tools for the arts and science. Intel’s Kno app provides access to 225,000 books. Typically, some of the books available via Kno are free, while others are fee-based.
Researchers last week warned that they had uncovered Heartbleed, a bug in the OpenSSL software commonly used to keep data secure, potentially allowing hackers to steal massive troves of information without leaving a trace.
Security experts initially told companies to focus on securing vulnerable websites, but have since warned about threats to technology used in data centers and on mobile devices running Google Inc’s Android software and Apple Inc’s iOS software.
Scott Totzke, BlackBerry senior vice president, told Reuters on Sunday that while the bulk of BlackBerry products do not use the vulnerable software, the company does need to update two widely used products: its Secure Work Space corporate email offering and its BBM messaging program for Android and iOS.
He said they are vulnerable to attacks by hackers if they gain access to those apps through either WiFi connections or carrier networks.
Still, he said, “The level of risk here is extremely small,” because BlackBerry’s security technology would make it difficult for a hacker to succeed in gaining data through an attack.
“It’s a very complex attack that has to be timed in a very small window,” he said, adding that it was safe to continue using those apps before an update is issued.
Google spokesman Christopher Katsaros declined to comment. Officials with Apple could not be reached.
Security experts say that other mobile apps are also likely vulnerable because they use OpenSSL code.
Michael Shaulov, chief executive of Lacoon Mobile Security, said he suspects that apps that compete with BlackBerry in an area known as mobile device management are also susceptible to attack because they, too, typically use OpenSSL code.
He said mobile app developers have time to figure out which products are vulnerable and fix them.
“It will take the hackers a couple of weeks or even a month to move from ‘proof of concept’ to being able to exploit devices,” said Shaulov.
Technology firms and the U.S. government are taking the threat extremely seriously. Federal officials warned banks and other businesses on Friday to be on alert for hackers seeking to steal data exposed by the Heartbleed bug.
Companies including Cisco Systems Inc, Hewlett-Packard Co, International Business Machines Corp, Intel Corp, Juniper Networks Inc, Oracle Corp and Red Hat Inc have warned customers they may be at risk. Some updates are out, while other companies, like BlackBerry, are rushing to get them ready.
With Amazon’s Fire TV device the first out the door, the second wave of microconsoles has just kicked off. Amazon’s device will be joined in reasonably short order by one from Google, with an app-capable update of the Apple TV device also likely in the works. Who else will join the party is unclear; Sony’s Vita TV, quietly soft-launched in Japan last year, remains a potentially fascinating contender if given the right messaging and services, but for now it’s out of the race. One thing seems certain, though; at least this time we’re actually going to have a party.
“Second wave”, you see, rather implies the existence of a first wave of microconsoles, but last time out the party was disappointing, to say the least. In fact, if you missed the first wave, don’t feel too bad; you’re in good company. Despite enthusiasm, Kickstarter dollars and lofty predictions, the first wave of microconsole devices tanked. Ouya, Gamestick and their ilk just turned out to be something few people actually wanted or needed. Somewhat dodgy controllers and weak selections of a sub-set of Android’s game library merely compounded the basic problem – they weren’t sufficiently cheap or appealing compared to the consoles reaching their end-of-life and armed with a vast back catalogue of excellent, cheap AAA software.
“The second wave microconsoles will enjoy all the advantages their predecessors did not. They’ll be backed by significant money, marketing and development effort, and will have a major presence at retail”
That was always the reality which deflated the most puffed-up “microconsoles will kill consoles” argument; the last wave of microconsoles sucked compared to consoles, not just for the core AAA gamer but for just about everyone else as well. Their hardware was poor, their controllers uncomfortable, their software libraries anaemic and their much-vaunted cost savings resulting from mobile game pricing rather than console game pricing tended to ignore the actual behaviour of non-core console gamers – who rarely buy day-one software and as a result get remarkably good value for money from their console gaming experiences. Comparing mobile game pricing or F2P models to $60 console games is a pretty dishonest exercise if you know perfectly well that most of the consumers you’re targeting wouldn’t dream of spending $60 on a console game, and never have to.
Why is the second wave of microconsoles going to be different? Three words: Amazon, Google, Apple. Perhaps Sony; perhaps even Samsung or Microsoft, if the wind blows in the right direction for those firms (a Samsung microconsole, sold separately and also bundled into the firm’s TVs, as Sony will probably do with Vita TV in future Bravia televisions, would make particular sense). Every major player in the tech industry has a keen interest in controlling the channel through which media is consumed in the living room. Just as Sony and Microsoft originally entered the games business with a “trojan horse” strategy for controlling living rooms, Amazon and Google now recognise games as being a useful way to pursue the same objective. Thus, unlike the plucky but poorly conceived efforts of the small companies who launched the first wave of microconsoles, the second wave is backed by the most powerful tech giants in the world, whose titanic struggle with each other for control of the means of media distribution means their devices will have enormous backing.
To that end, Amazon has created its own game studios, focusing their efforts on the elusive mid-range between casual mobile games and core console games. Other microconsole vendors may take a different approach, creating schemes to appeal to third-party developers rather than building in-house studios (Apple, at least, is almost guaranteed to go down this path; Google could yet surprise us by pursuing in-house development for key exclusive titles). Either way, the investment in software will come. The second wave of microconsoles will not be “boxes that let you play phone games on your TV”; at least not entirely. Rather, they will enjoy dedicated software support from companies who understand that a hit exclusive game would be a powerful way to drive installed base and usage.
Moreover, this wave of microconsoles will enjoy significant retail support. Fire TV’s edge is obvious; Amazon is the world’s largest and most successful online retailer, and it will give Fire TV prime billing on its various sites. The power of being promoted strongly by Amazon is not to be underestimated. Kindle Fire devices may still be eclipsed by the astonishing strength of the iPad in the tablet market, but they’re effectively the only non-iPad devices in the running, in sales terms, largely because Amazon has thrown its weight as a retailer behind them. Apple, meanwhile, is no laggard at retail, operating a network of the world’s most profitable stores to sell its own goods, while Google, although the runt of the litter in this regard, has done a solid job of balancing direct sales of its Nexus handsets with carrier and retail sales, work which it could bring to bear effectively on a microconsole offering.
In short, the second-wave microconsoles will enjoy all the advantages their predecessors lacked. They’ll be backed by significant money, marketing and development effort, and will have a major presence at retail. Moreover, they’ll be “trojan horse” devices in more ways than one, since their primary purpose will be as media devices, streaming content from Amazon, Google Play, iTunes, Hulu, Netflix and so on, while also serving as solid gaming devices in their own right. Here, then, is the convergence that microconsole advocates (and the rather less credible advocates of Smart TV) have been predicting all along: a tiny box that will stream all your media off the network while also building in enough gaming capability to satisfy the mainstream of consumers. Between the microconsole under the TV and the phone in your pocket, that’s gaming all sewn up, they reckon; just as a smartphone camera is good enough for almost everyone, leaving digital SLRs and their ilk to the devoted hobbyist, the professional and the poseur, a microconsole and a smartphone will be more than enough gaming for almost everyone, leaving dedicated consoles and gaming PCs to a commercially irrelevant hardcore fringe.
There are, I think, two problems with that assessment. The first is the notion that the “hardcore fringe” who will use dedicated gaming hardware is small enough to be commercially irrelevant; I’ve pointed out before that the strong growth of a new casual gaming market does not have to come at the cost of growth in the core market, and may even support it by providing a new stream of interested consumers. This is not a zero-sum game, and will not be a zero-sum game until we reach a point where there are no more non-gaming consumers out there to introduce to our medium. Microconsoles might do very well and still cause not the slightest headache to PlayStation, Xbox or Steam.
The second problem with the assessment is a problem with the microconsoles themselves – a problem which the Fire TV suffers from very seriously, and which will likely be replicated by subsequent devices. The problem is control.
Games are an interactive experience. Having a box which can run graphically intensive games is only one side of the equation – it is, arguably, the less important side of the equation. The other side is the controller, the device through which the player interacts with the game world. The most powerful graphics hardware in the world would be meaningless without some enjoyable, comfortable, well-designed method of interaction for players; and out of the box, Fire TV doesn’t have that.
Sure, you can control games (some of them, anyway) with the default remote control, but that’s going to be a terrible experience. I’m reminded of terribly earnest people ten years ago trying to convince me that you could have fun controlling complex games on pre-smartphone phones, or on TV remote controls linked up to cable boxes; valiant efforts ultimately doomed not only by a non-existent business ecosystem but by a terrible, terrible user experience. Smartphones heralded a gaming revolution not just because of the App Store ecosystem, but because it turned out that a sensitive multi-touch screen isn’t a bad way of controlling quite a lot of games. It still doesn’t work for many types of game; a lot of traditional game genres are designed around control mechanisms that simply can’t be shoehorned onto a smartphone. By and large, though, developers have come to grips with the possibilities and limitations of the touchscreen as a controller, and are making some solid, fun experiences with it.
With Fire TV, and I expect with whatever offering Google and Apple end up making, the controller is an afterthought – both figuratively and literally. You have to buy it separately, which keeps down the cost of the basic box but makes it highly unlikely that the average purchaser will be able to have a good game experience on the device. The controller itself doesn’t look great, which doesn’t help much, but simply being bundled with the box would make a bold statement about Fire TV’s gaming ambitions. As it is, this is not a gaming device. It’s a device that can play games if you buy an add-on; the notion that a box is a “gaming device” just because its internal chips can process game software, even if it doesn’t have the external hardware required to adequately control the experience, is the kind of notion only held by people who don’t play or understand games.
This is the Achilles’ heel of the second generation of microconsoles. They offer a great deal: the backing of the tech giants, potentially huge investment and enormous retail presence. They could, with the right wind in their sails, help to bring “sofa gaming” to the same immense, casual audience that presently enjoys “pocket gaming”. Yet the giant unsolved question remains: how will these games be controlled? A Fire TV owner, a potential casual gamer, who tries to play a game using his remote control and finds the experience frustrating and unpleasant won’t go off and buy a controller to make things better; he’ll shrug and return to the Hulu app, dismissing the Games panel of the device as a pointless irrelevance.
The answer doesn’t have to be “bundle a joypad”. Perhaps it’ll be “tether to a smartphone”, a decision which would demand a whole new approach to interaction design (which would be rather exciting, actually). Perhaps a simple Wiimote-style wand could double as a remote control and a great motion controller or pointer. Perhaps (though I acknowledge this as deeply unlikely) a motion sensor like a “Kinect Lite” could be the solution. Many compelling approaches exist and deserve to be tried out, but one thing is absolutely certain: while the second generation of microconsoles is likely to do very well in sales terms, these devices will primarily be bought as media streaming boxes, and they will never be an important games platform until the question of control gets a good answer.
For a trial that centers on smartphones and the technology they use, it’s more than a little ironic. The entire case might not even be taking place if the market weren’t so big and important, but everyone’s constant need for connectivity is causing problems in the court, hence the new sign.
The problems have centered on the system that displays the court reporter’s real-time transcription onto monitors on the desks of Judge Lucy Koh, the presiding judge in the case, and the lawyers of Apple and Samsung. The system, it seems, is connected via Wi-Fi and that connection keeps failing.
“We have a problem,” Judge Koh told the courtroom on April 4, soon after the problem first appeared. Without the system, Koh said she couldn’t do her job, so if people didn’t shut off electronics, she might have to ban them from the courtroom.
In many other courts, electronic devices are routinely banned, but the Northern District of California and Judge Koh have embraced technology more than most. While reporters and spectators are limited to a pen and paper in courts across the country, the court here permits live coverage through laptops and even provides a free Wi-Fi network.
On Monday, the problems continued and Judge Koh again asked for all cellphones to be switched off.
But not everyone listened. A scan of the courtroom revealed at least one hotspot that hadn’t been switched off: an SK Telecom roaming device from South Korea, likely used by a member of Samsung’s team.
The hotspot was switched off by the end of the day, but on Tuesday there were more problems.
“You. Ma’am. You in the front row,” Judge Koh said sternly during a break. She’d spotted an Apple staffer using her phone and made the culprit stand, give her name and verbally agree not to use the handset again in court.
As a result of all the problems, lawyers for Apple and Samsung jointly suggested using a scheduled two-day break in the case to hardwire the transcription computers to the court’s network.
The cable wasn’t installed.
“I believe there were some issues. We’re attempting to install it,” one of the attorneys told IDG News Service during the court lunch break.
So for now, the problems continue.
The clerk opened the day with an appeal to switch phones off entirely: “not even airplane mode.”
That still didn’t help.
The transcription screens failed at 9:09 a.m., just minutes into the first session of the morning.