Oracle issued a comprehensive list of its software that may or may not be affected by the OpenSSL vulnerability known as Heartbleed, while warning that no fixes are yet available for some likely affected products.
The list includes well over 100 products that appear to be in the clear, either because they never used the version of OpenSSL reported to be vulnerable to Heartbleed, or because they don’t use OpenSSL at all.
However, Oracle is still investigating whether roughly 20 other products, including MySQL Connector/C++, Oracle SOA Suite and Nimbula Director, are vulnerable.
Oracle determined that seven products are vulnerable and is offering fixes. These include Communications Operation Monitor, MySQL Enterprise Monitor, MySQL Enterprise Server 5.6, Oracle Communications Session Monitor, Oracle Linux 6, Oracle Mobile Security Suite and some Solaris 11.2 implementations.
Another 14 products are likely to be vulnerable, but Oracle doesn’t have fixes for them yet, according to the post. These include BlueKai, Java ME and MySQL Workbench.
Users of Oracle’s growing family of cloud services may also be able to breathe easy. “It appears that both externally and internally (private) accessible applications hosted in Oracle Cloud Data Centers are currently not at risk from this vulnerability,” although Oracle continues to investigate, according to the post.
Heartbleed, which was revealed by researchers last week, can allow attackers who exploit it to steal information on systems thought to be protected by OpenSSL encryption. A fix for the vulnerable version of OpenSSL has been released and vendors and IT organizations are scrambling to patch their products and systems.
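Heartbleed affected OpenSSL releases 1.0.1 through 1.0.1f, with 1.0.1g carrying the fix. As a rough illustration (the `is_vulnerable` helper is hypothetical, not from any vendor tooling), an administrator triaging systems might match a reported version string against that range:

```shell
# Heartbleed (CVE-2014-0160) affects OpenSSL 1.0.1 through 1.0.1f.
# Classify a version string against that range.
is_vulnerable() {
  case "$1" in
    1.0.1|1.0.1[a-f]) echo "vulnerable" ;;
    *) echo "not affected" ;;
  esac
}

is_vulnerable "1.0.1e"   # vulnerable
is_vulnerable "1.0.1g"   # not affected
is_vulnerable "0.9.8y"   # not affected
```

On a live system, `openssl version` reports the installed release. Note that some Linux distributions backported the fix without changing the version string, so vendor advisories remain the authoritative word on whether a given build is actually exposed.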
Observers consider Heartbleed one of the most serious Internet security vulnerabilities in recent times.
Meanwhile, this week Oracle also shipped 104 patches as part of its regular quarterly release.
The patch batch includes security fixes for Oracle Database 11g and 12c, Fusion Middleware 11g and 12c, Fusion Applications, WebLogic Server and dozens of other products. Some 37 patches target Java SE alone.
A detailed rundown of the vulnerabilities’ relative severity has been posted to an official Oracle blog.
“I think you’ll see wide-area, high-bandwidth [smart]watches this year at some point,” said Glenn Lurie, president of emerging devices at AT&T, in an interview.
The company has a group working in Austin, Texas, on thousands of wearable-device prototypes, and is also looking at certifying third-party devices for use on its network, Lurie said.
“A majority of stuff you’re going to see today that’s truly wearable is going to be in a watch form factor to start,” Lurie said. If smartwatch use takes off — “and we believe it can,” Lurie said — then those devices could become hubs for wearable computing.
Right now smartwatches lack LTE capabilities, so they are largely reliant on smartphones for apps and notifications. With a mobile broadband connection, a smartwatch becomes an “independent device,” Lurie said.
“We’ve been very, very clear in our opinion that a wearable needs to be a stand-alone device,” Lurie said.
AT&T and Filip Technologies in January released the Filip child tracker wristwatch, which also allows a parent to call a child over AT&T’s network. Filip could be improved, but those are the kind of wearable products that AT&T wants to bring to market.
Wearables for home health care are also candidates for LTE connections, Lurie said, but fitness trackers may be too small for LTE connectivity, at least for now.
Lurie couldn’t say when smartglasses would be certified to work on AT&T’s network. Google last year said adding cellular capabilities to its Glass eyewear wasn’t in the plans because of battery use. But AT&T is willing to experiment with devices to see where LTE would fit.
“It’s one thing if I’m buying it to go out for a jog; it’s another thing if I’m going to wear it every day. Those are the things people are debating right now — how that’s all going to come out,” Lurie said. “There’s technology and there’s innovation happening, and those things will get solved.”
Lurie said battery issues are being resolved, and that there are no network capacity issues. Wearable devices don’t use much bandwidth because they relay short bursts of information, unless someone is, for instance, listening to Pandora radio on a smartwatch, Lurie said.
But AT&T is building out network capacity, adding Wi-Fi networks, and virtualizing networks to accommodate more devices.
“We don’t have network issues, we don’t have any capacity issues,” Lurie said. “The key element to adding these devices is a majority of [them] aren’t high-bandwidth devices.”
AT&T wants to make wearables work with its home offerings like the Digital Life home automation and security system. AT&T is also working with car makers for LTE integration, with wearables interacting with vehicles to open doors and start ignitions.
MediaTek has shown off one of its most interesting SoC designs to date at the China Electronic Information Expo. The MT6595 was announced a while ago, but this is apparently the first time MediaTek showcased it in action.
It is a big.LITTLE octa-core with integrated LTE support. It has four Cortex-A17 cores backed by four Cortex-A7 cores, and it can hit 2.2GHz. The GPU of choice is the PowerVR G6200. It supports 2K and 4K video playback and recording, as well as H.265, and it can deal with a 20-megapixel camera, too.
The really interesting bit is the modem. It can handle TD-LTE/FDD-LTE/WCDMA/TD-SCDMA/GSM networks, hence the company claims it is the first octa-core with on-board LTE. Qualcomm has also announced an LTE-enabled octa-core, but it won’t be ready anytime soon. The MT6595 will be: it is expected to show up in actual devices very soon.
Of course, MediaTek is going after a different market. Qualcomm is building the meanest possible chip with four 64-bit Cortex A57 cores and four A53 cores, while MediaTek is keeping the MT6595 somewhat simpler, with smaller 32-bit cores.
The revisions more explicitly spell out the manner in which Google software scans users’ emails, both when messages are stored on Google’s servers and when they are in transit, a controversial practice that has been at the heart of litigation.
Last month, a U.S. judge decided not to combine several lawsuits that accused Google of violating the privacy rights of hundreds of millions of email users into a single class action.
Users of Google’s Gmail email service have accused the company of violating federal and state privacy and wiretapping laws by scanning their messages so it could compile secret profiles and target advertising. Google has argued that users implicitly consented to its activity, recognizing it as part of the email delivery process.
Google spokesman Matt Kallman said in a statement that the changes “will give people even greater clarity and are based on feedback we’ve received over the last few months.”
Google’s updated terms of service added a paragraph stating that “our automated systems analyze your content (including emails) to provide you personally relevant product features, such as customized search results, tailored advertising, and spam and malware detection. This analysis occurs as the content is sent, received, and when it is stored.”
Mark Karpeles, the founder of Mt. Gox, has refused to come to the United States to answer questions about the Japanese bitcoin exchange’s U.S. bankruptcy case, Mt. Gox lawyers told a federal judge on Monday.
In the court filing, Mt. Gox lawyers cited a subpoena from the U.S. Department of the Treasury’s Financial Crimes Enforcement Network, which has closely monitored virtual currencies like bitcoin.
“Mr. Karpeles is now in the process of obtaining counsel to represent him with respect to the FinCEN Subpoena. Until such time as counsel is retained and has an opportunity to ‘get up to speed’ and advise Mr. Karpeles, he is not willing to travel to the U.S.,” the filing said.
The subpoena requires Karpeles to appear and provide testimony in Washington, D.C., on Friday.
The court papers also said a Japanese court had been informed of the issue and that a hearing was scheduled on Tuesday in Japan.
Bitcoin is a digital currency that, unlike conventional money, is bought and sold on a peer-to-peer network independent of central control. Its value has soared in the last year, and the total worth of bitcoins minted is now about $7 billion.
Mt. Gox, once the world’s biggest bitcoin exchange, filed for bankruptcy protection in Japan last month, saying it may have lost nearly half a billion dollars worth of the virtual coins due to hacking into its computer system.
According to Monday’s court filings, the subpoena did not specify topics for discussion.
In the court filings, Karpeles’ lawyers asked the court to delay the bankruptcy deposition to May 5, 2014, but said that Mt. Gox could not guarantee that Karpeles would attend then either.
Researchers last week warned they uncovered Heartbleed, a bug that targets the OpenSSL software commonly used to keep data secure, potentially allowing hackers to steal massive troves of information without leaving a trace.
Security experts initially told companies to focus on securing vulnerable websites, but have since warned about threats to technology used in data centers and on mobile devices running Google Inc’s Android software and Apple Inc’s iOS software.
Scott Totzke, BlackBerry senior vice president, told Reuters on Sunday that while the bulk of BlackBerry products do not use the vulnerable software, the company does need to update two widely used products: Secure Work Space corporate email and BBM messaging program for Android and iOS.
He said the apps are vulnerable to attack if hackers gain access to them through either Wi-Fi connections or carrier networks.
Still, he said, “The level of risk here is extremely small,” because BlackBerry’s security technology would make it difficult for a hacker to succeed in gaining data through an attack.
“It’s a very complex attack that has to be timed in a very small window,” he said, adding that it was safe to continue using those apps before an update is issued.
Google spokesman Christopher Katsaros declined to comment. Officials with Apple could not be reached.
Security experts say that other mobile apps are also likely vulnerable because they use OpenSSL code.
Michael Shaulov, chief executive of Lacoon Mobile Security, said he suspects that apps that compete with BlackBerry in an area known as mobile device management are also susceptible to attack because they, too, typically use OpenSSL code.
He said mobile app developers have time to figure out which products are vulnerable and fix them.
“It will take the hackers a couple of weeks or even a month to move from ‘proof of concept’ to being able to exploit devices,” said Shaulov.
Technology firms and the U.S. government are taking the threat extremely seriously. Federal officials warned banks and other businesses on Friday to be on alert for hackers seeking to steal data exposed by the Heartbleed bug.
Companies including Cisco Systems Inc, Hewlett-Packard Co, International Business Machines Corp, Intel Corp, Juniper Networks Inc, Oracle Corp and Red Hat Inc have warned customers they may be at risk. Some updates are already out, while other companies, such as BlackBerry, are rushing to get theirs ready.
With Amazon’s Fire TV device the first out the door, the second wave of microconsoles has just kicked off. Amazon’s device will be joined in reasonably short order by one from Google, with an app-capable update of the Apple TV device also likely in the works. Who else will join the party is unclear; Sony’s Vita TV, quietly soft-launched in Japan last year, remains a potentially fascinating contender if given the right messaging and services, but for now it’s out of the race. One thing seems certain, though: at least this time we’re actually going to have a party.
“Second wave”, you see, rather implies the existence of a first wave of microconsoles, but last time out the party was disappointing, to say the least. In fact, if you missed the first wave, don’t feel too bad; you’re in good company. Despite enthusiasm, Kickstarter dollars and lofty predictions, the first wave of microconsole devices tanked. Ouya, Gamestick and their ilk just turned out to be something few people actually wanted or needed. Somewhat dodgy controllers and weak selections of a sub-set of Android’s game library merely compounded the basic problem – they weren’t sufficiently cheap or appealing compared to the consoles reaching their end-of-life and armed with a vast back catalogue of excellent, cheap AAA software.
“The second wave microconsoles will enjoy all the advantages their predecessors did not. They’ll be backed by significant money, marketing and development effort, and will have a major presence at retail”
That was always the reality which deflated the most puffed-up “microconsoles will kill consoles” argument; the last wave of microconsoles sucked compared to consoles, not just for the core AAA gamer but for just about everyone else as well. Their hardware was poor, their controllers uncomfortable, their software libraries anaemic and their much-vaunted cost savings resulting from mobile game pricing rather than console game pricing tended to ignore the actual behaviour of non-core console gamers – who rarely buy day-one software and as a result get remarkably good value for money from their console gaming experiences. Comparing mobile game pricing or F2P models to $60 console games is a pretty dishonest exercise if you know perfectly well that most of the consumers you’re targeting wouldn’t dream of spending $60 on a console game, and never have to.
Why is the second wave of microconsoles going to be different? Three words: Amazon, Google, Apple. Perhaps Sony; perhaps even Samsung or Microsoft, if the wind blows in the right direction for those firms (a Samsung microconsole, sold separately and also bundled into the firm’s TVs, as Sony will probably do with Vita TV in future Bravia televisions, would make particular sense). Every major player in the tech industry has a keen interest in controlling the channel through which media is consumed in the living room. Just as Sony and Microsoft originally entered the games business with a “trojan horse” strategy for controlling living rooms, Amazon and Google now recognise games as being a useful way to pursue the same objective. Thus, unlike the plucky but poorly conceived efforts of the small companies who launched the first wave of microconsoles, the second wave is backed by the most powerful tech giants in the world, whose titanic struggle with each other for control of the means of media distribution means their devices will have enormous backing.
To that end, Amazon has created its own game studios, focusing their efforts on the elusive mid-range between casual mobile games and core console games. Other microconsole vendors may take a different approach, creating schemes to appeal to third-party developers rather than building in-house studios (Apple, at least, is almost guaranteed to go down this path; Google could yet surprise us by pursuing in-house development for key exclusive titles). Either way, the investment in software will come. The second wave of microconsoles will not be “boxes that let you play phone games on your TV”; at least not entirely. Rather, they will enjoy dedicated software support from companies who understand that a hit exclusive game would be a powerful way to drive installed base and usage.
Moreover, this wave of microconsoles will enjoy significant retail support. Fire TV’s edge is obvious; Amazon is the world’s largest and most successful online retailer, and it will give Fire TV prime billing on its various sites. The power of being promoted strongly by Amazon is not to be underestimated. Kindle Fire devices may still be eclipsed by the astonishing strength of the iPad in the tablet market, but they’re effectively the only non-iPad devices in the running, in sales terms, largely because Amazon has thrown its weight as a retailer behind them. Apple, meanwhile, is no laggard at retail, operating a network of the world’s most profitable stores to sell its own goods, while Google, although the runt of the litter in this regard, has done a solid job of balancing direct sales of its Nexus handsets with carrier and retail sales, work which it could bring to bear effectively on a microconsole offering.
In short, the second wave microconsoles will enjoy all the advantages their predecessors did not. They’ll be backed by significant money, marketing and development effort, and will have a major presence at retail. Moreover, they’ll be “trojan horse” devices in more ways than one, since their primary purpose will be as media devices, streaming content from Amazon, Google Play, iTunes, Hulu, Netflix and so on, while also serving as solid gaming devices in their own right. Here, then, is the convergence that microconsole advocates (and the rather less credible advocates of Smart TV) have been predicting all along; a tiny box that will stream all your media off the network and also build in enough gaming capability to satisfy the mainstream of consumers. Between the microconsole under the TV and the phone in your pocket, that’s gaming all sewn up, they reckon; just as a smartphone camera is good enough for almost everyone, leaving digital SLRs and their ilk to the devoted hobbyist, the professional and the poseur, a microconsole and a smartphone will be more than enough gaming for almost everyone, leaving dedicated consoles and gaming PCs to a commercially irrelevant hardcore fringe.
There are, I think, two problems with that assessment. The first is the notion that the “hardcore fringe” who will use dedicated gaming hardware is small enough to be commercially irrelevant; I’ve pointed out before that the strong growth of a new casual gaming market does not have to come at the cost of growth in the core market, and may even support it by providing a new stream of interested consumers. This is not a zero-sum game, and will not be a zero-sum game until we reach a point where there are no more non-gaming consumers out there to introduce to our medium. Microconsoles might do very well and still cause not the slightest headache to PlayStation, Xbox or Steam.
The second problem with the assessment is a problem with the microconsoles themselves – a problem which the Fire TV suffers from very seriously, and which will likely be replicated by subsequent devices. The problem is control.
Games are an interactive experience. Having a box which can run graphically intensive games is only one side of the equation – it is, arguably, the less important side of the equation. The other side is the controller, the device through which the player interacts with the game world. The most powerful graphics hardware in the world would be meaningless without some enjoyable, comfortable, well-designed method of interaction for players; and out of the box, Fire TV doesn’t have that.
Sure, you can control games (some of them, anyway) with the default remote control, but that’s going to be a terrible experience. I’m reminded of terribly earnest people ten years ago trying to convince me that you could have fun controlling complex games on pre-smartphone phones, or on TV remote controls linked up to cable boxes; valiant efforts ultimately doomed not only by a non-existent business ecosystem but by a terrible, terrible user experience. Smartphones heralded a gaming revolution not just because of the App Store ecosystem, but because it turned out that a sensitive multi-touch screen isn’t a bad way of controlling quite a lot of games. It still doesn’t work for many types of game; a lot of traditional game genres are designed around control mechanisms that simply can’t be shoehorned onto a smartphone. By and large, though, developers have come to grips with the possibilities and limitations of the touchscreen as a controller, and are making some solid, fun experiences with it.
With Fire TV, and I expect with whatever offering Google and Apple end up making, the controller is an afterthought – both figuratively and literally. You have to buy it separately, which keeps down the cost of the basic box but makes it highly unlikely that the average purchaser will be able to have a good game experience on the device. The controller itself doesn’t look great, which doesn’t help much, but simply being bundled with the box would make a bold statement about Fire TV’s gaming ambitions. As it is, this is not a gaming device. It’s a device that can play games if you buy an add-on; the notion that a box is a “gaming device” just because its internal chips can process game software, even if it doesn’t have the external hardware required to adequately control the experience, is the kind of notion only held by people who don’t play or understand games.
This is the Achilles’ Heel of the second generation of microconsoles. They offer a great deal – the backing of the tech giants, potentially huge investment and enormous retail presence. They could, with the right wind in their sails, help to bring “sofa gaming” to the same immense, casual audience that presently enjoys “pocket gaming”. Yet the giant unsolved question remains; how will these games be controlled? A Fire TV owner, a potential casual gamer, who tries to play a game using his remote control and finds the experience frustrating and unpleasant won’t go off and buy a controller to make things better; he’ll shrug and return to the Hulu app, dismissing the Games panel of the device as being a pointless irrelevance.
The answer doesn’t have to be “bundle a joypad”. Perhaps it’ll be “tether to a smartphone”, a decision which would demand a whole new approach to interaction design (which would be rather exciting, actually). Perhaps a simple Wiimote style wand could double as a remote control and a great motion controller or pointer. Perhaps (though I acknowledge this as deeply unlikely) a motion sensor like a “Kinect Lite” could be the solution. Many compelling approaches exist which deserve to be tried out; but one thing is absolutely certain. While the second generation of microconsoles are going to do very well in sales terms, they will primarily be bought as media streaming boxes – and will never be an important games platform until the question of control gets a good answer.
For a trial that centers on smartphones and the technology they use, it’s more than a little ironic. The entire case might not even be taking place if the market weren’t so big and important, yet everyone’s constant need for connectivity is causing problems in the court, hence the new sign.
The problems have centered on the system that displays the court reporter’s real-time transcription onto monitors on the desks of Judge Lucy Koh, the presiding judge in the case, and the lawyers of Apple and Samsung. The system, it seems, is connected via Wi-Fi and that connection keeps failing.
“We have a problem,” Judge Koh told the courtroom on April 4, soon after the problem first appeared. Without the system, Koh said she couldn’t do her job, so if people didn’t shut off electronics, she might have to ban them from the courtroom.
In many other courts, electronic devices are routinely banned, but the Northern District of California and Judge Koh have embraced technology more than most. While reporters and spectators are limited to a pen and paper in courts across the country, the court here permits live coverage through laptops and even provides a free Wi-Fi network.
On Monday, the problems continued and Judge Koh again asked for all cellphones to be switched off.
But not everyone listened. A scan of the courtroom revealed at least one hotspot hadn’t been switched off: It was an SK Telecom roaming device from South Korea, likely used by a member of Samsung’s team.
The hotspot was switched off by the end of the day, but on Tuesday there were more problems.
“You. Ma’am. You in the front row,” Judge Koh said sternly during a break. She’d spotted an Apple staffer using her phone and made the culprit stand, give her name and verbally agree not to use the handset again in court.
As a result of all the problems, lawyers for Apple and Samsung jointly suggested using a scheduled two-day break in the case to hardwire the transcription computers to the court’s network.
The cable wasn’t installed.
“I believe there were some issues. We’re attempting to install it,” one of the attorneys told IDG News Service during the court lunch break.
So for now, the problems continue.
The clerk opened the day with an appeal to switch phones off, “not even airplane mode.”
That still didn’t help.
The transcription screens failed at 9:09 a.m., just minutes into the first session of the morning.
Cisco has been accused of helping the Chinese authorities snoop on, discriminate against and violently suppress the religious group Falun Gong.
The Electronic Frontier Foundation (EFF) has taken Cisco to task about this, and has filed a request to submit an amicus brief in a US District Court in California.
It asks the court to let the case “Doe vs Cisco Systems” go ahead, telling it that the firm has aided China’s human rights abuses.
“China’s record of human rights abuses against the Falun Gong is notorious, including detention, torture, forced conversions, and even deaths. These violations have been well-documented by the UN, the US State Department, and many others around the world, including documentation of China’s use of sophisticated surveillance technologies to facilitate this repression,” it said.
“The central claim in the case is that Cisco purposefully customized its general purpose router technology to allow the Chinese government to identify, track, and detain Falun Gong members.”
The EFF alleges that Cisco was asked to customize its kit so that the Chinese authorities could pick up Falun Gong ‘signatures’ and enable the logging and monitoring of traffic patterns.
Its lawsuit alleges that Cisco knew about this customization, knew that it would be used to repress the Falun Gong, and still marketed and supported the technologies “towards that purpose”.
“In fact, the case arises in part from the publication several years ago of a presentation in which Cisco confirms that the Golden Shield is helpful to the Chinese government to ‘Combat Falun Gong Evil Religion and Other Hostilities’,” adds the EFF.
“It also alleges that these customizations were actually used to identify and detain the plaintiffs.”
Cisco has declined our request to comment on the views of the EFF and its lawsuit.
Facebook released its second government requests report covering the second half of 2013, and it expands its scope from the first one in two ways. First, it includes requests to restrict or remove users’ content from the site, whereas the first report was limited to requests for account information. And second, the report now includes data on Instagram, the photo sharing site owned by Facebook.
Facebook is not breaking out the number of Instagram requests; they’re included in the overall tallies. But Instagram’s inclusion speaks to the popularity of the service, which Facebook acquired in 2012 but didn’t include in its government requests report for the first half of 2013.
The report includes data on government requests to receive data about Instagram accounts and to restrict access to its content.
Facebook receives requests to restrict or remove content based on countries’ laws over what can be shared online. When the request is legally sound, Facebook restricts access to content in the specific country whose government objected to it. If Facebook also determines that the flagged content violates its own standards, it removes the content globally. Separately, Facebook also receives requests for account information and data, many of which relate to criminal cases such as robberies or kidnappings.
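The two-tier process described above can be modeled as a simple decision flow. This is purely an illustrative sketch of the policy as reported, not Facebook's actual implementation; the `handle_request` function and its parameters are hypothetical:

```python
def handle_request(legally_sound: bool, violates_standards: bool, country: str) -> str:
    """Model the reported takedown policy: a legally sound government request
    gets a country-level restriction; content that also violates the
    platform's own standards is removed everywhere."""
    if not legally_sound:
        # Overly broad, vague, or non-compliant requests yield no action.
        return "no action"
    if violates_standards:
        return "removed globally"
    return f"restricted in {country}"

print(handle_request(True, False, "India"))   # restricted in India
print(handle_request(True, True, "India"))    # removed globally
print(handle_request(False, False, "India"))  # no action
```

The key design point is that the country-level restriction depends only on local law, while global removal requires the stricter, platform-wide standards test to fail as well.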
Facebook does not hand over data every time it receives a government request — sometimes the requests are overly broad or vague, or do not comply with legal standards, the company says.
In the U.S., Facebook received about 12,600 law enforcement requests in the second half of 2013, up from the range of 11,000-12,000 it tallied in its first report. For the second half of 2013, Facebook said it produced data for about 81 percent of the requests.
Regarding U.S. government requests about national security matters, Facebook reported receiving anywhere from zero to 999, saying it couldn’t be more specific due to U.S. legal restrictions.
Governments in other countries across the world are also interested in Facebook users’ data. India ranked second behind the U.S. with about 3,600 requests targeting more than 4,700 accounts. Facebook produced data for roughly half of those requests.
More than 1,900 requests came from the U.K., while the governments of France, Germany and Italy each served Facebook with more than 1,600 data requests.
Besides Facebook, other companies like Yahoo, Google and Microsoft periodically release their own government request reports, as part of an effort to be more transparent to users. The tallies have taken on increased significance following leaks about U.S. government surveillance made by former contractor Edward Snowden.
The Internet retailer would jump into a crowded market dominated by Apple Inc and Samsung Electronics Co Ltd.
The company has recently been demonstrating versions of the handset to developers in San Francisco and Seattle. It intends to announce the device in June and ship it to stores around the end of September, the Journal cited unidentified sources as saying.
Amazon has made great strides into the hardware arena as it seeks to boost sales of digital content and puts its online store in front of more users. Amazon recently launched its $99 Fire TV video-streaming box and its Kindle e-readers and Fire tablets already command respectable U.S. market share after just a few years on the market.
Rumors of an Amazon-designed smartphone have circulated for years, though executives have previously played down ambitions to leap into a heavily competitive and increasingly saturated market.
Apple and Samsung, which once accounted for the lion’s share of the smartphone market, are struggling to maintain margins as new entrants such as Huawei and Lenovo target the lower-income segment.
To stand out from the crowd, Amazon intends to equip its phones with screens that display three-dimensional images without a need for special glasses, the Journal said.
Amazon officials were not immediately available for comment.
BlackBerry Ltd would think about abandoning its handset business if it remains unprofitable, its chief executive officer said on Wednesday, as the technology company looks to expand its corporate reach with investments, acquisitions and partnerships.
“If I cannot make money on handsets, I will not be in the handset business,” John Chen said in an interview, adding that the time frame for such a decision was short. He would not be more specific, but said it should be possible to make money off shipments of as few as 10 million a year.
At its peak, BlackBerry shipped 52.3 million devices in fiscal 2011, while it recorded revenue on less than 2 million last quarter.
Chen, who took the helm of the struggling company in November, said BlackBerry was also looking to invest in or team up with other companies in regulated industries such as healthcare, and financial and legal services, all of which require highly secure communications.
The chief executive said small acquisitions to strengthen BlackBerry’s network security offerings were also possible.
“We are building an engineering team on the service side that is focused on security. We are building an engineering team on the device side that is focused on security. We will do some partnerships and we will probably, potentially do an M&A on security.”
He said security had become more important to businesses and government since the revelations about U.S. surveillance made by former National Security Agency contractor Edward Snowden.
In a wide-ranging interview in New York, Chen acknowledged past management mistakes and said he had a long-term strategy to complement the short-term goals of staying afloat and stemming customer defections.
“You have to live short term. Maybe the prior management had the luxury to bet the world would come to it. I don’t have the luxury at all. I’m losing money and burning cash.”
In March, the embattled smartphone maker reported a quarterly net loss of $423 million and a 64 percent drop in its revenues, underscoring the magnitude of the challenge Chen faces in turning around the company.
Chen said BlackBerry remained on track to be cash-flow positive by the end of the current fiscal year, which runs to the end of February 2015, and to return to profit some time in the fiscal year after that.
Chen said his long-term plans for BlackBerry included competing in the burgeoning business of connecting all manner of devices, from kitchen appliances to automotive consoles to smartphones.
Chen said he was not sure how long it would take for the “machine-to-machine” or “M2M” world to become a mainstream business, but he said he was sure that was coming.
“We are not only interested in managing BlackBerry devices. We are interested in managing all devices that you would like to speak to each other,” he said. “To achieve our dream of being a major player in M2M requires more partnerships with others,” including telecom companies eager to participate.
Based on the firm’s Kabini system on chip (SoC), the APU is named the “AM1 Platform”, combining most system functions into one chip, with the motherboard and APU together costing between $39 and $59.
Launched at the beginning of March and released today in North America, AMD’s AM1 Platform is aimed at markets where entry-level PCs are competing against other low-cost devices.
“We’re seeing that the market for these lower-cost PCs is increasing,” said AMD desktop product marketing manager Adam Kozak. “We’re also seeing other devices out there trying to fill that gap, but there’s really a big difference between what these devices can do versus what a Windows PC can do.”
The AM1 Platform combines an Athlon or Sempron processor with a motherboard based on the FS1b upgradable socket design. These motherboards have no chipset, as all functions are integrated into the APU, and only require additional memory modules to make a working system.
The AM1 SoC has up to four Jaguar CPU cores and an AMD Graphics Core Next (GCN) GPU, an on-chip memory controller supporting up to 16GB of DDR3-1600 RAM, plus all the typical system input and output functions, including SATA ports for storage, USB 2.0 and USB 3.0 ports, as well as VGA and HDMI graphics outputs.
AMD’s Jaguar core is best known for powering both Microsoft’s Xbox One and Sony’s PlayStation 4 (PS4) games consoles. The AM1 Platform supports Windows XP, Windows 7 and Windows 8.1 in 32-bit or 64-bit versions.
AMD said that it is going after Intel’s Bay Trail with the AM1 Platform, and expects to see it in small form factor PCs such as netbooks and media-streaming boxes.
“We see it being used for basic computing, some light productivity and basic gaming, and really going after the Windows 8.1 environment with its four cores, which we’ll be able to offer for less,” Kozak added.
AMD benchmarked the AM1 Platform against an Intel Pentium J2850 with PC Mark 8 v2 and claimed it produced double the performance of the Intel processor. See the table below.
The FS1b upgradable socket means that users will be able to upgrade the system at a later date, whereas on Bay Trail and other low-cost platforms the processor is mounted directly to the motherboard.
AMD lifted the lid on its Kabini APU for tablets and mainstream laptops last May. AMD’s A series branded Kabini chips are quad-core processors, with the 15W A4-5000 and 25W A6-5200 clocked at 1.5GHz and 2GHz, respectively.
Dubbed Heartbleed, the bug was discovered in a software library used in servers, operating systems and email and instant messaging systems, and allows anyone to read the memory of systems running vulnerable versions of OpenSSL software.
OpenSSL is an open source implementation of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols by which email, instant messaging, and some VPNs are kept secure.
The vulnerability is called Heartbleed because it’s in the OpenSSL implementation of the TLS/DTLS heartbeat extension described in RFC6520, and when it is exploited it can lead to leaks of memory contents from the server to the client and from the client to the server.
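The mechanics of the flaw can be illustrated with a toy model (this is an illustrative sketch, not OpenSSL’s actual code): a heartbeat request carries a payload plus a claimed payload length, and the vulnerable handler echoes back as many bytes as the request claims, without checking that claim against the payload actually sent.

```python
# Toy model of the Heartbleed over-read (illustrative only, not OpenSSL code).
# The server data and function names here are invented for the example.

SERVER_MEMORY = b"secret_key=hunter2;session=abc123"  # hypothetical adjacent data

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    # The request's payload sits in memory next to other server data.
    buf = payload + SERVER_MEMORY
    # Bug: the handler trusts claimed_len and never compares it
    # to len(payload), so it can read past the real payload.
    return buf[:claimed_len]

def heartbeat_patched(payload: bytes, claimed_len: int) -> bytes:
    # The fix: validate the length field and drop malformed requests.
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]

# Attacker sends 4 bytes but claims 20: 16 bytes of adjacent memory leak out.
print(heartbeat_vulnerable(b"ping", 20))  # b'pingsecret_key=hunte'
print(heartbeat_patched(b"ping", 20))     # b''
```

Repeating such requests lets an attacker sweep through chunks of server memory, which is how keys and passwords can be recovered in practice.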
The researchers from security firm Codenomicon said that attackers could take advantage of the bug to eavesdrop on communications, steal data directly from server or client systems, and impersonate users and servers.
“This compromises the secret keys used to identify service providers and to encrypt the traffic, the names and passwords of the users and the actual content,” the researchers wrote on a website dedicated to the bug.
“Without using any privileged information or credentials, we were able to steal from ourselves the secret keys used for our X.509 certificates, user names and passwords, instant messages, emails and business critical documents and communication.”
Because such attacks are not traceable, it’s not clear how widespread the bug is or was, but it is thought that at least two-thirds of websites could be affected, as the most notable software using OpenSSL are the open source webservers Apache and nginx.
The researchers pointed out that the combined market share of those two webservers was over 66 percent of the active websites on the internet, according to Netcraft’s Web Server Survey released this month.
“You are likely to be affected either directly or indirectly. OpenSSL is the most popular open source cryptographic library and TLS implementation used to encrypt traffic on the Internet,” the researchers added.
“Your popular social site, your company’s site, commerce site, hobby site, site you install software from or even sites run by your government might be using vulnerable OpenSSL. Furthermore you might have client side software on your computer that could expose the data from your computer if you connect to compromised services.”
Although an updated version of OpenSSL has been released to patch this security vulnerability, it might take time before some operating system developers and software distributions deploy it.
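As a first-pass triage, administrators can check the reported OpenSSL version string: the 1.0.1 branch through 1.0.1f is affected, 1.0.1g carries the fix, and older branches (1.0.0 and 0.9.8) were never vulnerable. A simple checker might look like the sketch below; note that many distributions backport the fix without changing the version string, so this is only a rough screen, not a definitive test.

```python
# Rough triage: is an OpenSSL version string in the Heartbleed-affected range?
# Affected: 1.0.1 through 1.0.1f. Fixed: 1.0.1g and later.
# Caveat: distro builds may backport the fix without bumping this string.

def heartbleed_vulnerable(version: str) -> bool:
    if not version.startswith("1.0.1"):
        return False  # only the 1.0.1 branch was ever affected
    suffix = version[len("1.0.1"):]
    # Plain "1.0.1" and letter suffixes a-f are vulnerable; g onward is fixed.
    return suffix == "" or ("a" <= suffix <= "f")

for v in ("1.0.1e", "1.0.1g", "1.0.0m", "0.9.8y"):
    print(v, heartbleed_vulnerable(v))
# 1.0.1e True / 1.0.1g False / 1.0.0m False / 0.9.8y False
```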
“Recovery from this leak requires patching the vulnerability, revocation of the compromised keys and reissuing and redistributing new keys,” the researchers said. “Even doing all this will still leave any traffic intercepted by the attacker in the past vulnerable to decryption.”
A U.S. court has ruled that the Federal Trade Commission can proceed with a lawsuit against hotel group Wyndham Worldwide Corp for allegedly failing to properly secure consumers’ personal information.
Wyndham had argued that the commission did not have jurisdiction to sue over what it saw as lax security leading to data breaches, and had asked for the lawsuit to be dismissed.
Judge Esther Salas, of the U.S. District Court for the District of New Jersey, disagreed and ruled that the FTC should be allowed to proceed with its case.
Wyndham said in a statement that it planned to continue its fight.
“We continue to believe the FTC lacks the authority to pursue this type of case against American businesses, and has failed to publish any regulations that would give such businesses fair notice of any proposed standards for data security,” the company said. “We intend to defend our position vigorously.”
The FTC has accused Wyndham of failing to provide adequate security for its computer system, leading to three data breaches between April 2008 and January 2010. It says the breaches led to fraud worth $10.6 million.
FTC Chairwoman Edith Ramirez said she was “pleased that the court has recognized the FTC’s authority to hold companies accountable for safeguarding consumer data.
“We look forward to trying this case on the merits,” she said.
Wyndham operates several hotel brands, including the value-oriented Days Inn and Super 8. It is one of many organizations to acknowledge in recent years that it had been hacked by people seeking either financial gain or intellectual property.