Japanese electronics giant Panasonic Corp said it is gearing up to spend 1 trillion yen ($8.4 billion) on acquisitions over the next four years, bolstered by a stronger profit outlook for its automotive and housing technology businesses.
Chief Executive Kazuhiro Tsuga said at a briefing on Thursday that Panasonic doesn’t have specific acquisition targets in mind for now. But he said the firm will spend around 200 billion yen on M&A in the fiscal year that kicks off in April alone, and pledged to improve on Panasonic’s patchy track record on big deals.
“With strategic investments, if there’s an opportunity to accelerate growth, you need funds. That’s the idea behind the 1 trillion yen figure,” he said. Tsuga has spearheaded a radical restructuring at the Osaka-based company that has made it one of the strongest turnaround stories in Japan’s embattled technology sector.
Tsuga previously told Reuters that the company was interested in M&A deals in the European white goods market, a sector where Panasonic has comparatively low brand recognition.
The firm said on Thursday it’s targeting operating profit of 430 billion yen in the next fiscal year, up nearly 25 percent from the 350 billion yen it expects for the year ending March 31.
Panasonic’s earnings have been bolstered by moving faster than peers like Sony Corp and Sharp Corp to overhaul business models squeezed by competition from cheaper Asian rivals and caught flat-footed in a smartphone race led by Apple Inc and Samsung Electronics. Out has gone reliance on mass consumer goods like TVs and smartphones, and in has come a focus on areas like automotive technology and energy-efficient home appliances.
Tsuga also sought to ease concerns that an expensive acquisition could set back its finances, which took years to recover from the deal agreed in 2008 to buy cross-town rival Sanyo for a sum equal to about $9 billion at the time.
Oracle and Intel have teamed up for the first demonstration of carrier-grade network function virtualization (NFV), which will allow communication service providers to use a virtualized, software-defined model without degradation of service or reliability.
The Oracle-led project uses the Intel Open Network Platform (ONP) to create a robust service over NFV, using intelligent direction of software to create viable software-defined networking that replaces the clunky equipment still prevalent in even the most modern networks.
Barry Hill, Oracle’s global head of NFV, told The INQUIRER: “It gets us over one of those really big hurdles that the industry is desperately trying to overcome: ‘Why the heck have we been using this very tightly coupled hardware and software in the past if you can run the same thing on standard, generic, everyday hardware?’. The answer is, we’re not sure you can.
“What you’ve got to do is be smart about applying the right type and the right sort of capacity, which is different for each function in the chain that makes up a service.
“That’s about being intelligent with what you do, instead of making some broad statement about generic vanilla infrastructures plugged together. That’s just not going to work.”
Oracle’s answer is to use its Communications Network Service Orchestration Solution to control the OpenStack system and shrink and grow networks according to customer needs.
Use cases could be scaling out a carrier network for a rock festival, or transferring network priority to a disaster recovery site.
“Once you understand the extent of what we’ve actually done here, you start to realize just how big an announcement this is,” said Hill.
“On the fly, you’re suddenly able to make these custom network requirements instantly, just using off-the-shelf technology.”
The demonstration configuration optimizes the performance of an Intel Xeon E5-2600 v3 processor designed specifically for networking, and shows for the first time a software-defined solution which is comparable to the hardware-defined systems currently in use.
In other words, it can orchestrate services from the management and orchestration level right down to a single core of a single processor, and then hyperscale it using resource pools to mimic the specialized characteristics of a network appliance, such as a large memory page.
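Orchestrating down to individual cores and large memory pages is exposed in OpenStack through flavor "extra specs". As a rough sketch only (the flavor name, sizes and image name are invented, and this assumes an OpenStack release that supports these properties), pinning a virtual network function to dedicated host cores backed by large pages looks something like:

```shell
# Create an illustrative flavor for a VNF: 8GB RAM, 20GB disk, 4 vCPUs.
nova flavor-create vnf.pinned auto 8192 20 4

# Pin the guest's vCPUs to dedicated physical cores rather than
# letting them float across the host.
nova flavor-key vnf.pinned set hw:cpu_policy=dedicated

# Back guest memory with large pages, mimicking the memory behaviour
# of a dedicated network appliance.
nova flavor-key vnf.pinned set hw:mem_page_size=large

# Boot the VNF instance with that placement policy applied.
nova boot --flavor vnf.pinned --image vnf-image demo-vnf
```

An orchestrator can then grow or shrink the service by booting or deleting instances of such flavors against shared resource pools.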
“It’s kind of like the effect that mobile had on fixed line networks back in the mid-nineties where the whole industry was disrupted by who was providing the technology, and what they were providing,” said Hill.
“Suddenly you went from 15-year business plans to five-year business plans. The impact of virtualization will have the same level of seismic change on the industry.”
Today’s announcement is fundamentally a proof-of-concept, but the technology that powers this kind of next-generation network is already evolving its way into networks.
Hill explained that carrier demand had led to the innovation. “The telecoms industry had a massive infrastructure that works at a very slow pace, at least in the past,” he said.
“However, this whole virtualization push has really been about the carriers, not the vendors, getting together and saying: ‘We need a different model’. So it’s actually quite advanced already.”
NFV appears to be the next gold rush area for enterprises, and other consortia are expected to make announcements about their own solutions within days.
The Oracle/Intel system is based around OpenStack, and the company is confident that it will be highly compatible with other systems.
The ‘Oracle Communications Network Service Orchestration Solution with Enhanced Platform Awareness using the Intel Open Network Platform’ – or OCNSOSWEPAUTIONP as we like to think of it – is currently on display at Oracle’s Industry Connect event in Washington DC.
The INQUIRER wonders whether there is any way the marketing department can come up with something a bit more catchy than OCNSOSWEPAUTIONP before it goes on open sale.
MSI recently announced a 970A SLI Krait motherboard that will support the AMD processors and the USB 3.1 protocol. Motherboards with USB 3.1 ports have also been released by Gigabyte, ASRock and Asus, but those boards support Intel chips.
USB 3.1 can shuffle data between a host device and a peripheral at 10Gbps, twice as fast as USB 3.0’s 5Gbps. USB 3.1 is also generating excitement for the reversible Type-C cable, which is the same on both ends so users don’t have to worry about plug orientation.
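To make the speed difference concrete, here is a quick back-of-the-envelope calculation of transfer times at the two signaling rates. This ignores encoding and protocol overhead, so real-world figures will be lower; the 25GB file size is just an example.

```python
def transfer_seconds(size_gb: float, rate_gbps: float) -> float:
    """Seconds to move size_gb gigabytes at rate_gbps gigabits per second."""
    return (size_gb * 8) / rate_gbps

# A 25GB file at each generation's raw signaling rate.
usb30 = transfer_seconds(25, 5)   # USB 3.0: 5 Gbps
usb31 = transfer_seconds(25, 10)  # USB 3.1: 10 Gbps
print(usb30, usb31)  # 40.0 20.0 seconds
```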
The motherboards with USB 3.1 technology are targeted at high-end desktops. Some enthusiasts like gamers seek the latest and greatest technologies and build desktops with motherboards sold by MSI, Asus and Gigabyte. Many of the new desktop motherboards announced have the Type-C port interface, which is also in recently announced laptops from Apple and Google.
New technologies like USB 3.1 usually first appear in high-end laptops and desktops, then make their way down to low-priced PCs, said Dean McCarron, principal analyst of Mercury Research.
PC makers are expected to start putting USB 3.1 ports in more laptops and desktops starting later this year.
At the WinHEC conference, Microsoft revealed that Windows 10 will support 8K (7680*4320) resolution for monitors, although such monitors are unlikely to show up on the market this year or next.
It also showed off minimum and maximum resolutions supported by its upcoming Windows 10. It looks like the new operating system will support 6″+ phone and tablet screens with up to 4K (3840*2160) resolution, 8″+ PC displays with up to 4K resolution and 27″+ monitors with 8K (7680*4320) resolution.
To put this in some perspective, the boffins at NHK (Nippon Hōsō Kyōkai, Japan Broadcasting Corp.) think that the 8K ultra-high-definition television format will be the last 2D format, as 7680*4320 (and similar resolutions) is the highest 2D resolution that the human eye can process.
This means that 8K and similar resolutions will stay around for a long time, so it makes sense to add support for them to hardware and software.
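A quick bit of arithmetic on the resolutions above shows why the jump is bigger than the names suggest: 8K has four times the pixels of 4K, since both dimensions double.

```python
# Pixel counts for the two UHD resolutions mentioned in the article.
resolutions = {
    "4K UHD": (3840, 2160),
    "8K UHD": (7680, 4320),
}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

ratio = pixels["8K UHD"] / pixels["4K UHD"]
print(pixels["8K UHD"], ratio)  # 33177600 pixels, 4.0x the 4K count
```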
NHK is already testing broadcasting in 8K ultra-high-definition, VESA has ratified DisplayPort and embedded DisplayPort standards that connect monitors of up to 8K resolution to graphics adapters, and a number of upcoming games will ship with textures for 8K UHD displays.
However, monitors that support 8K will not be around for some time because display makers will have to produce new types of panels for them.
Redmond will be ready for the advanced UHD monitors well before they hit the market. Many have criticized Microsoft for poor support of 4K UHD resolutions in Windows 8.
Buried in AMD’s shareholders’ report was some surprising detail about the outfit’s first 64-bit ARM server SoC.
For those who came in late, they are supposed to be going on sale in the first half of 2015.
We know that the ARM Cortex-A57 architecture based SoC has been codenamed ‘Hierofalcon.’
AMD started sampling these Embedded R-series chips last year and is aiming to release the chipset in the first half of this year for embedded data center applications, communications infrastructure, and industrial solutions.
But it looks like the Hierofalcon SoC will include eight Cortex-A57 cores with 4MB of L2 cache and will be manufactured on a 28nm process. It will support two 64-bit DDR3/4 memory channels with ECC at up to 1866MHz and up to 128GB per CPU. Connectivity options will include two 10GbE KR ports, eight SATA 3 6Gb/s ports, eight lanes of PCIe Gen 3, and SPI, UART and I2C interfaces. The chip will have a TDP between 15W and 30W.
The highly integrated SoC’s 10Gb KR Ethernet and PCI-Express Gen 3 make it well suited to control plane applications. The chip also features a dedicated security processor that enables ARM’s TrustZone technology, plus a dedicated cryptographic co-processor, addressing the increased need for networked, secure systems.
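For a sense of what the quoted memory configuration buys, the theoretical peak bandwidth can be worked out from the spec sheet figures: two 64-bit channels at an effective 1866 million transfers per second, each transfer moving 8 bytes per channel. Actual throughput will of course be lower.

```python
def peak_bandwidth_gbs(channels: int, bus_bits: int, mts: float) -> float:
    """Theoretical peak memory bandwidth in GB/s (decimal gigabytes).

    channels: number of memory channels
    bus_bits: width of each channel in bits
    mts:      effective transfer rate in MT/s
    """
    return channels * (bus_bits / 8) * mts * 1e6 / 1e9

# Hierofalcon as described: two 64-bit DDR3/4 channels at 1866MT/s.
hierofalcon = peak_bandwidth_gbs(2, 64, 1866)
print(round(hierofalcon, 1))  # ~29.9 GB/s theoretical peak
```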
Soon after Hierofalcon is out, AMD will be launching the SkyBridge platform that will feature interchangeable 64-bit ARM and x86 processors. Later in 2016, the company will be launching the K12 chip, its custom high performance 64-bit ARM core.
By making Parse available for IoT, Facebook hopes to strengthen its ties to a wider group of developers in a growing industry via three new software development kits aimed specifically at IoT, unveiled Wednesday at the company’s F8 developer conference in San Francisco.
The tools are aimed at making it easier for outside developers to build apps that interface with Internet-connected devices. Garage door manufacturer Chamberlain, for example, uses Parse for its app to let people open and lock their garage door from their smartphones.
Or, hypothetically, the maker of a smart gardening device could use Parse to incorporate notifications into their app to remind the user to water their plants, said Ilya Sukhar, CEO of Parse, during a keynote talk at F8.
Facebook bought Parse in 2013, putting itself in the business of selling application development tools. Parse provides a hosted back-end infrastructure to help third party developers build their apps. Over 400,000 developers have built apps with Parse, Sukhar said on Wednesday.
Parse’s new SDKs are available on GitHub as well as on Parse’s site.
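As a sketch of what the gardening example might look like over Parse’s documented REST conventions (the channel name, message and credentials here are invented; only the endpoint and header names follow Parse’s REST API), a device backend could assemble a push notification request like this:

```python
import json

def build_push_request(app_id: str, rest_key: str, channel: str, alert: str):
    """Assemble a Parse REST push request (URL, headers, JSON body)
    without actually sending it."""
    headers = {
        "X-Parse-Application-Id": app_id,
        "X-Parse-REST-API-Key": rest_key,
        "Content-Type": "application/json",
    }
    body = json.dumps({"channels": [channel], "data": {"alert": alert}})
    return "https://api.parse.com/1/push", headers, body

# Hypothetical smart-garden reminder pushed to subscribers of one channel.
url, headers, body = build_push_request(
    "APP_ID", "REST_KEY", "garden-sensor", "Time to water the basil!")
```

The returned pieces could then be sent with any HTTP client; keeping assembly separate from transport makes the payload easy to inspect and test.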
The vague announcement raised the question of whether Verizon is simply trying to show its competitive value against Google and AT&T, which have both announced fiber Internet services in a number of cities.
“I think Verizon is trying to play catch up to the others without saying it that way,” said independent analyst Jeff Kagan. “The only question I still have is will Verizon be a real competitor or is this mostly just talk to cover their butts in the rapidly changing marketplace?”
What Verizon did disclose in a news release was that it will be modernizing undisclosed portions of its so-called 100G (for 100 Gbps) metro optical network using packet-optimized networking gear from Ciena and Cisco. Testing and deployment of the Ciena 6500 optical switch and Cisco’s Network Convergence System will happen this year, with plans to go live in 2016.
“We are not announcing specific geographies at this time,” Verizon spokeswoman Lynn Staggs said in an email. She said the new equipment is not directly related to fiber connections to the premises of homes or businesses. By comparison, both Google Fiber and AT&T GigaPower are designed with 1 Gbps connections to homes, schools and businesses in mind.
Staggs said Verizon is upgrading connectivity between central Verizon offices and the backbone network. On top of that service, there is generally an “access” network for the last mile to connect the customer and the metro network, she added.
No matter how Verizon describes the ultimate purpose of its metro network, it is clear to analysts and others that Verizon’s metro upgrades could be used to prepare for last-mile fiber connections to businesses, schools and even homes to take on Google and AT&T directly. “Deploying a new coherent, optimized and highly scalable metro network means Verizon stays ahead of the growth trajectory while providing an even more robust network infrastructure for future demand,” said Lee Hicks, vice president of Verizon network planning, in a statement.
The service, dubbed Pony Express, would ask users to provide personal information, including credit card and Social Security numbers, to a third-party company that would verify their identity, according to a Re/code report on Tuesday.
Google also would work with vendors that distribute bills on behalf of service providers like insurance companies, telecom carriers and utilities, according to the article, which was based on a document seen by Re/code that describes the service.
It’s not clear whether Pony Express is the actual name of the service or if Google will change the name once it launches. It’s planned to launch by the end of the year, according to the report.
A Google spokeswoman declined to comment.
A handful of vendors such as Intuit, Invoicera and BillGrid already offer e-billing payment and invoicing software. Still, a Google service, especially one within Gmail, could be useful and convenient to consumers if the company is able to simplify the online payment process.
A benefit for Google could be access to valuable data about people’s e-commerce activities, although there would be privacy issues to sort out. Google already indexes people’s Gmail messages for advertising purposes.
Plus, the service could give Google an entry point into other areas of payment services. The company has already launched a car insurance shopping service for California residents, which it plans to expand to other states.
It’s unclear who Google’s partners would be for the service, but screen shots published by Re/Code show Cascadia Financial, a financial planning company, and food delivery service GreatFoods.
Azul specializes in bespoke open source Java runtimes and has announced that it is expanding into embedded product lines.
Scott Sellers, CEO and co-founder, and Howard Green, VP of marketing, were keen to extol the virtues of an embedded system.
“If you go with an Oracle system, not only do you have to pay a license fee but you are restricted to off-the-peg solutions,” explains Sellers.
“Because we are an open source solution we can create exactly what the customer needs, then feed that expertise back into the community where it will eventually end up in the official builds of Java.”
Oracle now bases its products around the open source community before releasing its own stable, closed source editions, so Zulu Embedded will often contain cutting edge functionality which is not available to standard (and paying) Java users.
“Our products are built out of a customer need. It’s not just about cost, but about finding new ways to use the Java runtime, which is still the most popular programming language in the world, and creating ways of getting it to do new things,” says Green.
The arrival of Zulu Embedded will open a whole host of opportunities for Internet of Things (IoT) building, but Sellers is keen for the product to be seen as more than just an IoT platform.
“Of course, by creating customized solutions we are able to strip out the libraries that are unnecessary and make a more nimble runtime with a smaller footprint, which makes it ideal for the IoT, but there is far more to it than that – everything from routers, to set-top boxes to ATMs,” explains Green.
The product officially launches today, but has been subject to a significant amount of testing in the field with selected customers.
“In actual fact, it has been available on a limited basis since last September and there are already over two million units running Zulu Embedded in the field,” says Green.
The product will be monetized by offering enterprise-grade support options to customers, while the product itself is freely available.
“We see the end-of-life schedule of Java SE as a major selling point for our own product,” says Green.
Oracle’s support for Java SE 7 has already expired, and it’s another two years before version 8 also reaches end-of-life. Azul, meanwhile, remains committed to its open source products indefinitely.
“Compared to all the alternatives which are either limited in lifespan or have large upfront licensing costs, we’re sure that, combined with our ongoing support, we’re the right choice for anyone wanting flexible deployment of Java,” says Sellers.
Zulu Embedded works across a huge number of platforms, including Mac, Windows and Linux, on Intel and AMD x64 architectures with ARM compatibility to follow.
It is also compatible with physical servers such as Windows Server, hypervisors including VMware and Hyper-V and cloud solutions like Microsoft Azure, Red Hat, Suse and Docker.
For Java as a language, however, Zulu Embedded is something of a return to its roots.
“Sun Microsystems [the original owners of Java] were very successful in the embedded market and paved the way for the vast number of applications that already have a Java runtime. With the end of support for Java 7, many people will be looking at where to go next,” explains Sellers.
Consumer users of Java have repeatedly lashed out at Oracle for its use of bundleware in Java installations, which recently spread to Mac users.
Zulu is available immediately from the Azul website, along with details on working with the Embedded version.
We’ve come a long way in the past nine years, when Sun and Azul were counter-suing over patents. Today, open source is the beating heart of Java, though many won’t realize it.
HP has announced its first off-the-shelf configured private cloud based on OpenStack and Cloud Foundry.
HP Helion Rack continues the Helion naming convention for HP’s cloud offerings, and will, it is hoped, help enterprise IT departments speed up cloud deployment by offering a solid template system and removing the months of design and build.
Helion Rack is a “complete” private cloud with integrated infrastructure-as-a-service and platform-as-a-service capabilities that mean it should be a breeze to get it working with cloud-dwelling apps.
“Enterprise customers are asking for private clouds that meet their security, reliability and performance requirements, while also providing the openness, flexibility and fast time-to-value they require,” said Bill Hilf, senior vice president of product management for HP Helion.
“HP Helion Rack offers an enterprise-class private cloud solution with integrated application lifecycle management, giving organisations the simplified cloud experience they want, with the control and performance they need.”
HP cites the key features of its product as rapid deployment, simplified management, easy scaling, workload flexibility, faster native-app development and, of course, the open architecture of OpenStack and Cloud Foundry, providing a vast support network for implementation, use cases and customisation.
The product is built on HP ProLiant DL servers, and is assembled by HP and configured with the HP Helion OpenStack and Development Platform. HP and its partners can then work alongside customers to find the best way to exploit the product knowing that it is up and running from day one.
HP Helion Rack will be available in April with prices varying by configuration. Finance is available for larger configurations.
Suse launched its own OpenStack Cloud 5 with Sahara data processing earlier this month, just one of many other implementations of OpenStack designed to help roll out the cloud revolution quickly to enterprises, but offering a complete 360 package is something that HP is pioneering.
Cosmic detectives are investigating a case of mistaken stellar identity: An exploding star that was once thought to be the oldest recorded nova — a nuclear explosion on the surface of a dead star — was more likely caused by the merger of two stars.
In 1670, a bright new star appeared in the constellation Cygnus, the Swan, and stayed there for two years. The short-lived star was grouped into the “nova” category, but over the last 30 years, astronomers have been questioning its identity.
A new research paper that examines the chemical makeup of the crime scene may be the final nail in the coffin. The researchers suggest that the so-called nova is instead the oldest example of another type of stellar explosion sometimes called a “red nova” — a relatively recently discovered phenomenon that scientists are still working to understand.
In 1670, a new star appeared just above the head of the swan that makes up the constellation Cygnus. Many astronomers took note of this newcomer, so its appearance and life span are well documented. It was dubbed Nova Vul 1670 —at the time, “nova” referred simply to any new star.
In the last 300 years, however, the word “nova” has taken on a much more specific and scientific meaning.
By today’s definition, a classic nova is an explosion that takes place on the surface of a white dwarf — the small, dense nugget of leftover material from a star that has stopped burning. The white dwarf syphons material away from another nearby star; pressure builds up on its surface and a nuclear reaction releases an incredible burst of energy. (Unlike Type Ia supernovas, which start in a similar fashion, the white dwarf in a nova is expected to survive the explosion.)
Many things about CK Vulpeculae’s identity as a nova just don’t line up, said Tomasz Kaminski, a postdoctoral fellow at the European Southern Observatory.
For example, novas tend to burn in the sky for days — not years, as CK Vulpeculae did. Plus, the new star of 1670 didn’t disappear right away. After two years, it faded, then reappeared, then faded for good — which is very unusual for a nova, Kaminski said. And observations have shown that CK Vulpeculae’s temperature is much lower than that of a nova, where the radiation from the nuclear reaction continues to generate heat after the explosion is done, Kaminski said.
The new study, which is detailed in the March 23 edition of the journal Nature, may finally strip CK Vulpeculae of its “nova” title. Kaminski and his co-authors looked at the different molecules present in the wreckage of CK Vulpeculae, and found a profile that they say cannot be created by a classical nova.
But if it isn’t a nova, then what is it?
In the new paper, Kaminski and his colleagues argue that CK Vulpeculae belongs to a class of phenomena that goes by multiple names in the scientific literature: red novas, red transients, luminous red transients and intermediate luminous optical transients (ILOTs), among others.
“People who study these red novas realized all the observations we have of these objects can be explained only if they explode as [an] effect of a collision and a merger of two stars,” Kaminski said.
The notion that a red nova could be a unique category of stellar explosion took hold in 2008, when astronomers watched two stars in a system orbit in toward each other and produce an explosion with the characteristics of a red nova, Kaminski said.
“Many of the novae we know from historical records could be this type; it’s just that people observe them during the outburst, and then no one really cared what happened with them,” Kaminski said. “And that’s why they didn’t realize maybe we’re dealing with some new phenomenon.”
Previous groups have suggested that CK Vulpeculae is a red nova. What Kaminski and his group have provided is the first look at the molecular profile of one of these objects, which he said is distinct from other stellar explosions.
“This is a major step,” he said. The chemical profile shows the presence of molecules and isotopes that are strange compared to other types of stellar explosions, including classical novas, Kaminski said. In fact, the profile is unusual even among red novas, which may be a product of CK Vulpeculae’s age — perhaps something happens in these red novas over time that produces a unique bouquet of chemicals, he said.
The researchers made their observations with the submillimeter-wavelength Atacama Pathfinder Experiment (APEX) telescope and Effelsberg radio telescope.
Kaminski cautioned that scientists are still working to demonstrate that red novas are, in fact, the products of stellar mergers.
Noam Soker, an astrophysicist at the Technion Israel Institute of Technology, was one of the scientists who previously suggested that CK Vulpeculae was a red nova. He and some of his colleagues have theorized that red novas are not the result of sudden stellar mergers but rather are produced by the gradual accretion of matter from one star to another. He and Kaminski said one thing that would help clarify the cause of a red nova would be observations inside the clouds of debris, to see the stars that remain there.
Kaminski said that, right now, the available evidence suggests that CK Vulpeculae is a red nova. But it’s possible that in 10 years, someone will come up with a different explanation for how this stellar explosion came to be.
“This is science and astronomy: You propose something new, and everyone is welcome to find supporting evidence, or disprove it with some new theory or new observations,” he said.
Facebook’s Messenger app has mostly been used for keeping in touch with friends. Now people can also use it to send each other money. In the future, it could become a platform that other apps could use, if recent rumors prove true.
This Wednesday and Thursday at its F8 conference in San Francisco, Facebook will show off new tools to help third-party developers build apps, deploy them on Facebook and monetize them through Facebook advertising.
Among those tools might be a new service for developers to publish content or features of their own inside Messenger, according to a TechCrunch article. Facebook did not respond to requests for comment.
Such a service could make Messenger more useful, if the right developers sign on. Search features, photo tools or travel functions could be incorporated into Messenger and improve users’ chats around events or activities.
However, Messenger already lets users exchange money, and it also handles voice calls. Layer on more services and Messenger could become bloated and inconvenient to use.
In other words, making Messenger a platform would be a gamble.
A more versatile Messenger could generate new user data Facebook could leverage for advertising, helping it counter a user growth slowdown in recent quarters. It could also boost Facebook’s perennial efforts to increase participants in its developer platform and the number of users of its third-party apps.
Even if Facebook doesn’t turn Messenger into a platform at F8, it will likely do so in the future, said John Jackson, an IDC analyst focused on mobile business strategies. For the same reasons Facebook might turn Messenger into a platform, it could do the same for other apps like WhatsApp or Instagram, he said.
“The objective is to enrich and multiply the nature of interactions on the platform,” providing valuable data along the way, he said.
People working for the Chinese edition of VR Zone have found evidence that Intel will launch only two Broadwell desktop processors in Q2 2015.
The new Broadwell Desktop CPUs are based on the LGA1150 pin layout and will be compatible with the current Z97 motherboards.
ASUS and ASRock recently announced that their motherboards will be able to handle the new 14nm Broadwell processors with a BIOS update. The two new CPUs will be the Intel Core i7-5775C and Core i5-5675C.
There are some odd things on this list. It is not clear what the C stands for in the product names. Our local AMD fanboy says it stands for C*ap, while others have suggested camel or caramel, depending on how hungry they are. The processors are unlocked for overclocking like the previous K models were, so it could be that the K has somehow become a C.
The new i7 has four cores and eight threads running at a base frequency of 3.3GHz with a turbo of 3.7GHz, while the four-core, four-thread i5 has a base speed of 3.1GHz and a turbo of 3.6GHz. The i7 comes with 6MB of cache, the i5 with only 4MB, and both are powered by the Intel Iris Pro Graphics 6200 iGPU.
Several U.S. broadband providers have filed lawsuits against the Federal Communications Commission’s recently approved net neutrality rules, launching what is expected to be a series of legal entanglements.
Broadband industry trade group USTelecom filed a lawsuit against the FCC in the U.S. Court of Appeals for the District of Columbia, which has in the past twice rejected the FCC’s net neutrality regulations.
The group argues the new rules are “arbitrary, capricious, and an abuse of discretion” and violate various laws, regulations and rulemaking procedures.
Texas-based Internet provider Alamo Broadband Inc challenged the FCC’s new rules in the U.S. Court of Appeals for the Fifth Circuit in New Orleans, making a similar argument.
The rules, approved in February and posted online on March 12, treat both wireless and wireline Internet service providers as more heavily regulated “telecommunications services,” more like traditional telephone companies.
Broadband providers are banned under the rules from blocking or slowing any traffic and from striking deals with content companies for smoother delivery of traffic to consumers.
USTelecom President Walter McCormick said in a statement that the group’s members supported enactment of “open Internet” principles into law but not using the new regulatory regime that the FCC chose.
“We do not believe the Federal Communications Commission’s move to utility-style regulation … is legally sustainable,” he said.
Industry sources have previously told Reuters that USTelecom and two other trade groups, CTIA-The Wireless Association and the National Cable and Telecommunications Association, were expected to lead the expected legal challenges.
Verizon Communications Inc, which won the 2010 lawsuit against the FCC, is likely to hold back from filing an individual lawsuit this time around, an industry source familiar with Verizon’s plan has told Reuters.
FCC officials have said they were prepared for lawsuits and the new rules were on much firmer legal ground than previous iterations. The FCC said Monday’s petitions were “premature and subject to dismissal.”
Cisco has revealed details of a new point of sale (PoS) attack that could part firms from money and users from personal data.
The threat has been called PoSeidon by the Cisco team and comes at a time when eyes are on security breaches at firms like Target.
Cisco said in a blog post that PoSeidon is a new threat that has the ability to breach machines and scrape them for credit card information.
Credit card numbers and keylogger data are sent to an exfiltration server, while the mechanism is able to update itself and presumably evade some detection.
Cisco’s advice is for the industry to keep itself in order and network admins to keep systems up to date.
“PoSeidon is another malware targeting PoS systems that demonstrates the sophisticated techniques and approaches of malware authors. Attackers will continue to target PoS systems and employ various obfuscation techniques in an attempt to avoid detection,” said the firm.
“As long as PoS attacks continue to provide returns, attackers will continue to invest in innovation and development of new malware families. Network administrators will need to remain vigilant and adhere to industry best practices to ensure coverage and protection against advancing malware threats.”
The security industry agrees that PoS malware is a cash cow for cyber thieves, highlighting the importance of vigilance and keeping systems up to date.
“PoS malware has been extremely productive for criminals in the last few years, and there’s little reason to expect that will change anytime soon,” said Tim Erlin, director of product management at Tripwire.
“It’s no surprise that, as the information security industry updates tools to detect this malicious software, the authors will continue to adjust and innovate to avoid detection.
“Standards like the PCI Data Security Standard can only lay the groundwork for protecting retailers and consumers from these threats. A standard like PCI can specify a requirement for malware protection, but any specific techniques included may become obsolete as malware evolves.
“Monitoring for new files and changes to files can detect when malware installs itself on a system, as PoSeidon does.”