Is SAP Losing Steam

October 23, 2017 by  
Filed under Around The Net

SAP, the maker of expensive management software whose purpose no one really understands, has seen its profits take a dip.

The outfit missed market expectations for third quarter profit as it invested heavily to shift business customers into cloud computing.

SAP said it is in the middle of a transition to offering cloud-based services to its business customers and management had flagged that 2017 would see a trough in profit margins as it invested in datacenters and redeployed staff.

The outfit said it should see a recovery next year and had a “very good shot” at stabilizing margins in the fourth quarter. Chief Financial Officer Luka Mucic told a conference call:  “Going into 2018 we see a margin turnaround.”

Revenue for the German business planning software provider grew eight percent to 5.59 billion euro from a year earlier, falling short of the mean forecast of 5.71 billion euro from 16 analysts surveyed by Reuters.

Core profit excluding special items rose by four percent to 1.64 billion euro at constant currency rates, SAP said, below the 1.69 billion euro expected by analysts.

The euro’s strength sliced four percentage points off core profit growth, leaving it flat after taking currency moves into account. Analysts at Baader Helvea said they expected currency headwinds to continue for the next three quarters.

The company nudged up guidance for the full year core operating profits to 6.85-7.0 billion euro and said 2017 total revenue would range from 23.4-23.8 billion euro, marking year-to-year growth of around six to eight percent, excluding currency effects.

Cloud subscriptions and support revenue rose 27 percent in the third quarter to 938 million euro, excluding currency effects, compared with the 29 percent analysts had expected, on average.

This was offset by its classic software license and support business revenue, which rose four percent to 3.72 billion euro, slightly above the 2.2 percent growth rate expected by analysts.

Chief Executive Bill McDermott was bullish for the fourth quarter: “We are gaining share against our competitors. SAP is growing faster in the cloud – and we are doing it organically.” During a conference call, he contrasted his company with the acquisition-fueled growth of its rivals.


Chrome 63 To Start Warning Users Of Man In The Middle Attacks

September 20, 2017 by  
Filed under Around The Net

Google Chrome 63 will warn users when third-party software is performing a Man-in-the-Middle (MitM) attack that hijacks their internet connection. 

A MitM attack happens when a communication between two systems is intercepted by a malicious actor through an application installed on a user’s computer, enabling them to send, alter and receive data meant for someone else.

It isn’t an easy attack to perpetrate, mainly because many MitM toolkits fail to correctly rewrite the user’s encrypted connections, causing SSL errors that Chrome can detect.

In Chrome 63, Google is introducing a new warning screen whenever the browser detects a large number of SSL connection errors within a short timeframe. This is a signal that an attacker is attempting to intercept the user’s web traffic, albeit with no success.

Errors can come from applications such as anti-virus software and firewalls, as well as from malware. But Chrome will filter the warning sign to only show up for software that has failed to rewrite SSL connections properly.
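The article doesn't describe Chrome's internal logic, but the heuristic it sketches, warn when many SSL errors cluster in a short timeframe, can be illustrated with a toy sliding-window detector. The threshold and window size below are illustrative guesses, not Chrome's actual values:

```python
import time
from collections import deque

class SslErrorBurstDetector:
    """Toy sketch of the heuristic described above: flag a warning when
    too many SSL connection errors occur within a short time window."""

    def __init__(self, threshold=5, window_seconds=10.0):
        self.threshold = threshold
        self.window = window_seconds
        self.errors = deque()  # timestamps of recent SSL errors

    def record_error(self, timestamp=None):
        now = time.monotonic() if timestamp is None else timestamp
        self.errors.append(now)
        # Drop errors that have fallen out of the sliding window.
        while self.errors and now - self.errors[0] > self.window:
            self.errors.popleft()
        return len(self.errors) >= self.threshold  # True => show warning

detector = SslErrorBurstDetector(threshold=3, window_seconds=5.0)
print(detector.record_error(0.0))   # False: one error so far
print(detector.record_error(1.0))   # False: two errors
print(detector.record_error(2.0))   # True: three errors within 5 s
print(detector.record_error(60.0))  # False: the old errors have expired
```

A real implementation would also need the filtering step the article mentions, distinguishing MitM-style rewriting failures from ordinary certificate errors.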

Chrome 63 is scheduled to be released on 5 December, according to the Chromium Development Calendar, and users can preview it through the Google Chrome development branch known as Google Canary.

The new security feature was developed by Sasha Perigo, a Stanford University student who interned at Google, working with the team responsible for Chrome. It isn’t enabled by default in Google Canary, but can be turned on manually.

Last month, Google revealed that it had developed a tool that lets users permanently mute websites that automatically play videos with sound.

The feature is currently only available in Google Canary as Google’s developers are still experimenting with it, but the company is likely to introduce the tool to Chrome users in the coming months. 

The features come as the so-called ‘browser wars’ start to hot up, once again, with Vivaldi offering a feature-packed alternative to Chrome and Opera, and Microsoft seeking to entice Windows 10 users to Edge.


Did Russian Hackers Breach U.S. And European Power Grids

September 15, 2017 by  
Filed under Around The Net

Symantec has claimed that Russian-linked hackers have targeted and successfully penetrated power grid networks in the US and Europe.

The attacks bear the hallmarks of a hacking group that Symantec calls Dragonfly, which the company believes is a front for a state-led hacking operation. The company implied – but didn’t explicitly state – that Dragonfly is connected with Russia.

“The Dragonfly group appears to be interested in both learning how energy facilities operate and also gaining access to operational systems themselves, to the extent that the group now potentially has the ability to sabotage or gain control of these systems should it decide to do so,” claimed Symantec.

Symantec issued a research note on Dragonfly in June 2014, claiming that they had “managed to compromise a number of strategically important organisations for spying purposes and, if they had used the sabotage capabilities open to them, could have caused damage or disruption to energy supplies in affected countries”.

The company suggested that Dragonfly was targeting energy grid operators, major electricity companies, oil pipeline operators and industrial equipment providers to the energy industry. The majority of the victims were located in the United States, Spain, France, Italy, Germany, Turkey, and Poland, it added.

Symantec claims that Dragonfly activity died down after it had been exposed in 2014, but restarted in December 2015, ratcheting up from around April this year.

“As it did in its prior campaign between 2011 and 2014, Dragonfly 2.0 uses a variety of infection vectors in an effort to gain access to a victim’s network, including malicious emails, watering hole attacks, and Trojanised software,” claimed Symantec in its latest report.

“The earliest activity identified by Symantec in this renewed campaign was a malicious email campaign that sent emails disguised as an invitation to a New Year’s Eve party to targets in the energy sector in December 2015.”

The group conducted further malicious email phishing campaigns during 2016 and 2017. “The emails contained very specific content related to the energy sector, as well as some related to general business concerns. Once opened, the attached malicious document would attempt to leak victims’ network credentials to a server outside of the targeted organisation,” it added.

Intriguingly, perhaps, the Dragonfly group was observed attempting to subvert legitimate software in order to deliver malware to victims – a tactic deployed in June’s NotPetya malware outbreak in which the software update servers of a Ukrainian accounting software company were compromised to deliver a Trojanised software update.

That attack had also been linked with the Russian state, with the malware absorbing some of the leaked US National Security Agency (NSA) exploits before the Shadow Brokers group, which claimed responsibility for cracking the server on which they had been hosted, had publicly released them.

The group is using the evasion framework Shellter to develop Trojanised applications, Symantec added.

Symantec said the attackers also attempted to convince victims they needed to download an update for their Flash player. “Shortly after visiting specific URLs, a file named ‘install_flash_player.exe’ was seen on victim computers, followed shortly by the Trojan.Karagany.B backdoor.”

“Typically, the attackers will install one or two backdoors onto victim computers to give them remote access and allow them to install additional tools if necessary. Goodor, Karagany.B, and Dorshel are examples of backdoors used, along with Trojan.Heriplor.”

While cyber attacks on infrastructure can be perpetrated with the intention of sabotage, the latest Dragonfly campaign appears to be reconnaissance, claimed Symantec.

“While Symantec cannot definitively determine Dragonfly’s origins, this is clearly an accomplished attack group.

“It is capable of compromising targeted organizations through a variety of methods; can steal credentials to traverse targeted networks; and has a range of malware tools available to it, some of which appear to have been custom developed. Dragonfly is a highly focused group, carrying out targeted attacks on energy sector targets since at least 2011, with a renewed ramping up of activity observed in the last year.”



Is Data Mining Sending GPU Prices Through The Roof

September 5, 2017 by  
Filed under Computing

Consumer demand for graphics cards may be undermined by price hikes arising from the GDDR memory shortage.

According to Digitimes, first-tier vendors are expected to raise their Nvidia GeForce GTX 1080/1070/1060/1050 graphics card pricing by three to 10 percent at the end of August.

From April to mid-July, the cryptocurrency mining segment was the main contributor to graphics card sales.

With demand from the segment starting to cool off since mid-July and graphics cards’ supply and pricing both stabilizing, sales from the retail channel have started picking up.

But the stable pricing may not last long because price hikes caused by memory shortages could again deter consumer demand.

Samsung and SK Hynix have cut their memory supply for the graphics card segment; August quotes for RAM used in graphics cards have risen to US$8.50, up 30.8 percent from US$6.50 in July.
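The quoted price rise checks out against the two figures given:

```python
july_price = 6.50    # US$ quote for graphics-card RAM in July, per the article
august_price = 8.50  # US$ quote in August

rise = (august_price - july_price) / july_price * 100
print(f"{rise:.1f}%")  # → 30.8%
```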

Both memory suppliers have allocated more of their production capacities to making memories for servers and handsets, reducing output for the graphics cards segment and fueling the price rally.


Are Russian Hackers Targeting Our Nuclear Sites

July 17, 2017 by  
Filed under Around The Net

Hackers have been targeting US nuclear facilities, their suppliers and manufacturing plants using phishing methods, US authorities have said.

Last week the US Department of Homeland Security and the FBI released a joint report into recent attacks, including one on Kansas-based nuclear power station operator Wolf Creek Nuclear Operating Corporation. The report was obtained by the New York Times.

The networks of Wolf Creek and other key infrastructure companies were said to have been infiltrated. The attackers appeared to be on a reconnaissance mission, seeking to understand the workings of the networks, possibly laying the groundwork for a future assault.

The authorities blamed an “advanced persistent threat” actor for the activity, which is usually taken to mean a state-sponsored group.

However, quoting unnamed sources, the NYT says the methodology deployed by the attackers is similar to the modus operandi of the Russian group “Energetic Bear” which has been blamed for hacking energy facilities and other key targets including financial institutions since 2012.

In the recent wave of attacks, which began in May, the attackers deployed spear-phishing techniques, emailing fake CVs with a malware payload to senior control engineers authorized to access the industrial control systems. The malware was designed to harvest user credentials and passwords, the report says. Other techniques involved man-in-the-middle and watering hole attacks using compromised legitimate websites known to be visited frequently by the targets.

While the joint DHS-FBI report carries an ‘amber’ threat warning, the industry appears to be downplaying the seriousness of the hackers’ activities.

Nuclear Energy Institute spokesperson John Keeley said that nuclear facilities are required by law to report cyberattacks but that none of the 100 or so facilities covered by the Institute have said that their security was compromised.

Meanwhile, in a joint statement with the FBI, a spokesman for the Department of Homeland Security said, “There is no indication of a threat to public safety, as any potential impact appears to be limited to administrative and business networks.”

The US Department of Energy also said the impact appears limited to administrative and business networks.

“Regardless of whether malicious actors attempt to exploit business networks or operational systems, we take any reports of malicious cyber activity potentially targeting our nation’s energy infrastructure seriously and respond accordingly,” a spokesperson told Bloomberg.


Can Quantum Entanglement Stop Hacking

June 29, 2017 by  
Filed under Around The Net

A Chinese satellite has split pairs of “entangled photons” and transmitted them to separate ground stations 745 miles (1,200 kilometers) apart, smashing the previous distance record for such a feat and opening new possibilities in quantum communication.

In quantum physics, when particles interact with each other in certain ways they become “entangled.” This essentially means they remain connected even when separated by large distances, so that an action performed on one affects the other.
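In the standard notation, a maximally entangled photon pair can be written as a Bell state, for example:

```latex
\[
  |\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)
\]
```

Measuring the first photon and finding 0 means the second photon will also be measured as 0 (and likewise for 1), no matter how far apart the two photons are, which is the “connectedness” described above.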

In a new study published online today (June 15) in the journal Science, researchers report the successful distribution of entangled photon pairs to two locations on Earth separated by 747.5 miles (1,203 km).

Quantum entanglement has interesting applications for testing the fundamental laws of physics, but also for creating exceptionally secure communication systems, scientists have said. That’s because quantum mechanics states that measuring a quantum system inevitably disturbs it, so any attempt to eavesdrop is impossible to hide.

But, it’s hard to distribute entangled particles — normally photons — over large distances. When traveling through air or over fiber-optic cables, the environment interferes with the particles, so with greater distances, the signal decays and becomes too weak to be useful.
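A back-of-the-envelope calculation shows why fiber fails at this distance. Assuming a typical telecom fiber loss of about 0.2 dB per km (an illustrative figure, not one from the paper):

```python
# Exponential attenuation in optical fiber: every 10 dB of loss
# means only a tenth of the photons survive.
loss_db_per_km = 0.2   # assumed loss for good telecom fiber at 1550 nm
distance_km = 1200     # roughly the ground-station separation in the study

total_loss_db = loss_db_per_km * distance_km   # 240 dB
transmission = 10 ** (-total_loss_db / 10)     # fraction of photons surviving

print(f"{total_loss_db:.0f} dB loss -> {transmission:.0e} of photons survive")
```

At around 10⁻²⁴ transmission, essentially no photons arrive, which is why routing most of the journey through the vacuum of space makes such a difference.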

In 2003, Pan Jianwei, a professor of quantum physics at the University of Science and Technology of China, started work on a satellite-based system designed to beam entangled photon pairs down to ground stations. The idea was that because most of the particle’s journey would be through the vacuum of space, this system would introduce considerably less environmental interference.

“Many people then thought it [was] a crazy idea, because it was very challenging already doing the sophisticated quantum-optics experiments inside a well-shielded optical table,” Pan told Live Science. “So how can you do similar experiments at thousand-kilometers distance scale and with the optical elements vibrating and moving at a speed of 8 kilometers per second [5 miles per second]?”

In the new study, researchers used China’s Micius satellite, which was launched last year, to transmit the entangled photon pairs. The satellite features an ultrabright entangled photon source and a high-precision acquiring, pointing and tracking (APT) system that uses beacon lasers on the satellite and at three ground stations to line up the transmitter and receivers.

Once the photons reached the ground stations, the scientists carried out tests and confirmed that the particles were still entangled despite having traveled between 994 miles and 1,490 miles (1,600 and 2,400 km), depending on what stage of its orbit the satellite was positioned at.

Only the lowest 6 miles (10 km) of Earth’s atmosphere are thick enough to cause significant interference with the photons, the scientists said. This means the overall efficiency of their link was vastly higher than previous methods for distributing entangled photons via fiber-optic cables, according to the scientists.

“We have already achieved a two-photon entanglement distribution efficiency a trillion times more efficient than using the best telecommunication fibers,” Pan said. “We have done something that was absolutely impossible without the satellite.”

Apart from carrying out experiments, one of the potential uses for this kind of system is for “quantum key distribution,” in which quantum communication systems are used to share an encryption key between two parties that is impossible to intercept without alerting the users. When combined with the correct encryption algorithm, this system is uncrackable even if encrypted messages are sent over normal communication channels, experts have said.
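A minimal sketch of the idea, assuming the “correct encryption algorithm” is a one-time pad: if the shared key is truly random, used once, and at least as long as the message, the ciphertext is information-theoretically secure. In a real system the key would come from quantum key distribution; here random bytes merely stand in for it:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each message byte with the corresponding key byte."""
    assert len(key) >= len(data), "one-time pad key must cover the whole message"
    return bytes(m ^ k for m, k in zip(data, key))

message = b"meet at dawn"
key = secrets.token_bytes(len(message))  # stand-in for a QKD-shared key

ciphertext = xor_bytes(message, key)
assert xor_bytes(ciphertext, key) == message  # XORing again recovers the plaintext
```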

Artur Ekert, a professor of quantum physics at the University of Oxford in the United Kingdom, was the first to describe how entangled photons could be used to transmit an encryption key.

“The Chinese experiment is quite a remarkable technological achievement,” Ekert told Live Science. “When I proposed the entangled-based quantum key distribution back in 1991 when I was a student in Oxford, I did not expect it to be elevated to such heights!”

The current satellite is not quite ready for use in practical quantum communication systems, though, according to Pan. For one, its relatively low orbit means each ground station has coverage for only about 5 minutes each day, and the wavelength of photons used means it can only operate at night, he said.

Boosting coverage times and areas will mean launching new satellites with higher orbits, Pan said, but this will require bigger telescopes, more precise tracking and higher link efficiency. Daytime operation will require the use of photons in the telecommunications wavelengths, he added.

But while developing future quantum communication networks will require considerable work, Thomas Jennewein, an associate professor at the University of Waterloo’s Institute for Quantum Computing in Canada, said Pan’s group has demonstrated one of the key building blocks.


Is nVidia Investing A Lot Of Money Into A.I. Start-Ups?

May 3, 2017 by  
Filed under Computing

Nvidia is telling the world about its investments in AI start-ups ahead of its GPU Technology Conference in San Jose on May 8-11.

Writing in his blog, Nvidia’s vice president of business development Jeff Herbst said Nvidia will provide technical guidance, joint marketing help, strategic direction, and other aid. Nvidia made the investments through its GPU Ventures program.

The start-ups include Abeja, a Tokyo-based startup focused on AI-powered retail analytics systems; Datalogue, a New York AI data-mining platform; Optimus Ride, an MIT spinoff working on autonomous vehicles; SoundHound, which makes voice-enabled AI solutions; TempoQuest, a Boulder-based startup creating GPU-accelerated weather forecasting; and Zebra Medical, an Israel-based start-up using AI to read medical images.

Nvidia also made its third investment in MapD, which uses GPUs to query massive databases and is about to announce another AI startup investment.


Will The A.I. Space Heat-Up This Year?

April 13, 2017 by  
Filed under Computing

Major component manufacturers in the artificial intelligence (AI) market have all increased their efforts to develop more aggressive processors for AI-fueled markets in 2017 including autonomous vehicles, enterprise drones, medical care, smart factories, image recognition, and general neural network research and development.

Intel’s Nervana platform is a $400 million investment in AI

Back in November, Intel announced what it claims is a comprehensive AI platform for data center and compute applications called Nervana, with its focus aimed directly at taking on Nvidia’s GPU solutions for enterprise users. The platform is the result of the chipmaker’s $400 million acquisition in August of Nervana Systems, a 48-person startup led by former Qualcomm researcher Naveen Rao. Built using FPGA technology and designed for highly-optimized AI solutions, Intel claims Nervana will deliver up to a 100-fold reduction in the time it takes to train a deep learning model within the next three years.

The company intends to integrate Nervana technology into Xeon and Xeon Phi processor lineups. During Q1, it will test the Nervana Engine chip, codenamed ‘Lake Crest,’ and make it available to key customers later within the year. The chip will be specifically optimized for neural networks to deliver the highest performance for deep learning, with unprecedented compute density and a high-bandwidth interconnect.

For its Xeon Phi processor lineup, the company says that its next generation series codenamed “Knights Mill” is expected to deliver up to four times better performance for deep learning. Intel has also announced a preliminary version of Skylake-based Intel Xeon processors with support for AVX-512 instructions to significantly boost performance of inference tasks in machine learning workloads.

“Intel is committed to AI and is making major investments in technology and developer resources to advance AI for business and society,” said Intel CEO Brian Krzanich.

Nvidia partners with Microsoft on AI cloud computing platform

Earlier last month, Nvidia showed no signs of slowing its AI cloud computing efforts by announcing a partnership with Microsoft for a hyperscale GPU accelerator called HGX-1. The partnership includes integration with Microsoft Project Olympus, an open, modular, very flexible hyperscale cloud hardware platform that includes a universal motherboard design (1U/2U chassis), high-density storage expansion, and a broad ecosystem of compliant hardware products developed by the OCP community.

Nvidia claims that HGX-1 establishes an industry standard for cloud-based AI workloads similar to what the ATX form factor did for PC motherboards more than two decades ago. The HGX-1 is powered by eight Tesla P100 accelerators connected through NVLink and the PCI-E standard. Nvidia’s hyperscale GPU accelerator will, it claims, allow cloud service providers to easily adopt Tesla and Quadro accelerator cards to meet the surging demand for AI computing. The company plans to host another GPU technology conference in May 2017 where it is expected to unveil more updates on its AI plans.

On the consumer front, Nvidia’s Shield platform integrates with Amazon Echo, Nest and Ring to provide customers with a “connected home experience”, while Spot is its direct take on Amazon Echo and brings ambient AI assistance into the living room. For drivers, the company’s latest AI car supercomputer is called Xavier and is powered by an 8-core custom ARM64 processor and a 512-core Volta-based GPU. The unit is designed to the ASIL D safety rating, the highest automotive hazard classification, and can deliver 30 tera-ops of deep learning in a 30W design.

Qualcomm’s acquisition of NXP signals investment in AI market

Back in October, San Diego-based Qualcomm bought NXP, the leader in high-performance, mixed-signal semiconductor electronics – and a leading solutions supplier to the automotive industry – for $47 billion. The two companies, joined into a single entity, have now represented what is considered a strong contender in automotive, IoT, and security and networking industries. Using several automotive safety sensor IPs from the acquisition, including radar microcontroller units (MCUs), anti-lock braking systems, MCU-enabled airbags, and real-time tire pressure monitoring, Qualcomm is now positioned to be a “one-stop solution” for many automotive customers.

With its Snapdragon and Adreno graphics capabilities, the company is well positioned to compete with Nvidia in the automotive market and stands a much better chance of developing its self-driving car platform with the help of NXP and Freescale IP in its product portfolios.

AMD targets AI learning workloads with Radeon Instinct accelerators

Back in December, AMD also announced its strategy to dramatically push its presence into the AI-related server computing business with the launch of new Radeon Instinct accelerator cards and MIOpen, a free, comprehensive open-source library for developing deep learning and machine intelligence applications.

The company’s Radeon Instinct series is expected to be a general-purpose solution for developing AI-based applications using deep learning frameworks. This includes autonomous vehicles, HPC, nano-robots, personal assistance, autopilot drones, telemedicine, smart home, financial services and energy, among other sectors. Some analysts note, however, that AMD is the only company uniquely positioned with both x86 and GPU processor technologies, allowing it to fulfill the role of meeting a variety of data center solutions on demand. The company has been developing what it claims is an efficient connection between both application processor types to meet the growing needs of AI’s technological demands.

The MIOpen deep learning library was expected to be released in Q1, but may have been delayed by a couple of weeks. Meanwhile, AMD’s Radeon Open Compute (ROCm) Platform lets programmers focus on training neural networks through Caffe, Torch 7, and Tensorflow, rather than wasting time doing mundane, low level, performance tuning tasks.


Is Samsung Going All-In On A.I.?

March 1, 2017 by  
Filed under Computing

Samsung may be about to write a billion dollar cheque to pick up some artificial intelligence technology.

The billion will not just be used for acquisitions, but also to invest in companies involved in AI.

This is in addition to what Samsung has already bought, including the acquisition of Viv Labs, an AI company from the team behind Apple’s Siri, plus the many references to its own AI assistant coming soon, which we currently know as Bixby.

Samsung also contributed to SoundHound’s recent funding round, which is focused on development of its Houndify AI platform. Joining the Catalyst program is Samsung Next, a $150 million fund for startups specialising in VR, the Internet of Things, and artificial intelligence.

We have heard all this before. Last year Samsung’s head of software research and development told Bloomberg:  “We are actively looking for M&A targets of all sorts in the software area. We are open to all possibilities, including artificial intelligence. Intelligence is no longer an option. It’s a must.”

However, this is the first time we’re hearing about a cash figure linked to Samsung’s interest in AI, and it’s big enough to show that the outfit is serious. The first move will come with the Galaxy S8, which is expected to feature Bixby, an AI assistant. Of course, the Tame Apple Press claims this is all because Samsung is envious of Apple’s super cool Siri, even if that AI is looking rather out of date now.


Researchers Show How Heartbeats Can Be Used As Passwords

January 23, 2017 by  
Filed under Around The Net

Researchers at Binghamton University in New York believe your heart could be the key to securing your personal data. By measuring the electrical activity of the heart, researchers say they can encrypt patients’ health records.

The fundamental idea is this: In the future, all patients will be outfitted with a wearable device, which will continuously collect physiological data and transmit it to the patients’ doctors. Because electrocardiogram (ECG) signals are already collected for clinical diagnosis, the system would simply reuse the data during transmission, thus reducing the cost and computational power needed to create an encryption key from scratch.
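As a toy illustration only, not the paper’s actual scheme, an encryption key could be derived by quantizing a window of ECG samples and hashing it:

```python
import hashlib

def key_from_ecg(samples, quantization=10):
    """Toy sketch: quantize an ECG window and hash it into a 256-bit key.
    A real scheme (like the paper's) must tolerate natural ECG variation;
    a plain hash like this would yield a completely different key for even
    a slightly different reading."""
    quantized = bytes(int(s * quantization) % 256 for s in samples)
    return hashlib.sha256(quantized).digest()

ecg_window = [0.12, 0.80, 1.45, 0.33, -0.20, 0.05]  # made-up sample values
key = key_from_ecg(ecg_window)
print(len(key))  # 32 bytes -> a 256-bit symmetric key
```

The appeal described above is that the ECG data is already being collected for diagnosis, so deriving the key from it costs little extra computation on a battery-constrained wearable.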

“There have been so many mature encryption techniques available, but the problem is that those encryption techniques rely on some complicated arithmetic calculations and random key generations,” said Zhanpeng Jin, a co-author of the paper “A Robust and Reusable ECG-based Authentication and Data Encryption Scheme for eHealth Systems.”

Those encryption techniques can’t be “directly applied on the energy-hungry mobile and wearable devices,” Jin added. “If you apply those kinds of encryption on top of the mobile device, then you can burn the battery very quickly.”

But there are drawbacks. According to Jin, one of the reasons ECG encryption has not been widely adopted is because it’s generally more sensitive and vulnerable to variations than some other biometric measures. For instance, your electrical activity could change depending on factors such as physical exertion and mental state. Other more permanent factors such as age and health can also have an effect.

“ECG itself cannot be used for a biometric authentication purpose alone, but it’s a very effective way as a secondary authentication,” Jin said.

While the technology for ECG encryption is already here, its adoption will depend on patients’ willingness to don wearables and on their comfort with constantly sharing their biometrics.

Researchers Develop Smartphone Sensor That Detects Cancer

November 3, 2016 by  
Filed under Mobile

Researchers at Washington State University have developed a portable sensor that uses smartphone cameras to detect a biological indicator for several types of cancers with 99 percent accuracy, yielding laboratory-quality results.

The sensor, a light spectrometer, can process up to eight blood or tissue samples at the same time (or one sample in eight wells) and can detect the human protein interleukin-6 (IL-6). That protein is a known biological marker for lung, prostate, liver, breast and epithelial cancers.

“At a time when patients and medical professionals expect always faster results, researchers are trying to translate biodetection technologies used in laboratories to the field and clinic, so patients can get nearly instant diagnoses in a physician’s office, an ambulance or the emergency room,” the researchers said in a statement.

A spectrometer analyzes the amount and type of chemicals in a sample by measuring the light spectrum. The research was published in the journal Biosensors and Bioelectronics.

While smartphone spectrometers exist today, the WSU researchers said the eight-channel smartphone spectrometer is unique, and inexpensive to produce — about $150.

A custom smartphone multi-view app uses the phone’s built-in camera and was developed to control the optical sensing parameters and to align each sample to the corresponding spectrometer channel. The captured images are converted into a spectrum in the visible wavelength range.
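The conversion step amounts to a calibration from pixel position in the captured image to wavelength. The linear mapping and pixel range below are hypothetical stand-ins for the app’s real calibration, which would be fitted against known spectral lines:

```python
def column_to_wavelength(col, col_min=0, col_max=639,
                         wl_min_nm=400.0, wl_max_nm=700.0):
    """Hypothetical linear calibration: map a pixel column in the captured
    image to a wavelength across the visible range (400-700 nm)."""
    frac = (col - col_min) / (col_max - col_min)
    return wl_min_nm + frac * (wl_max_nm - wl_min_nm)

print(round(column_to_wavelength(0)))    # 400 (nm) at one edge of the image
print(round(column_to_wavelength(639)))  # 700 (nm) at the other edge
```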

The initial cancer spectrometer was created for an iPhone 5, but it can be adjusted to work with any smartphone, according to Lei Li, an assistant professor in WSU’s School of Mechanical and Materials Engineering. Li, who led the research team, also filed a provisional patent for the work.

“With our eight-channel spectrometer, we can put eight different samples to do the same test, or one sample in eight different wells to do eight different tests. This increases our device’s efficiency,” Li said.


SAP Buys Cloud Start-Up Altiscale

October 4, 2016 by  
Filed under Computing

SAP, the esoteric outfit which makes expensive business software that no one can be quite certain what it does, has just bought the cloud start-up Altiscale.

SAP said that Altiscale offers cloud-based versions of the Hadoop and Spark open source software for storing, processing and analysing different types of data. It is thought that the deal was worth about $125 million, but this is mostly guessing.

Altiscale has published a blog post to let its customers know that it will become a part of SAP. Apparently SAP wants to harness its technology:

“Altiscale is a natural fit for SAP, as we share our overall focus of helping enterprises derive business value from data — and successfully use big data. Since Altiscale is a leader in big data-as-a-service based on Hadoop and Spark, it enables SAP to drive end-to-end value in Big Data across the technology, data platform PaaS (platform as a service), analytics, and application stack”.
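For the uninitiated, Hadoop's core programming model is MapReduce: a map step emits key/value pairs from each input record, and a reduce step aggregates the values per key, with the framework distributing both steps across a cluster. A single-process word-count sketch of the model in plain Python (purely conceptual; not SAP or Altiscale code, and real Hadoop or Spark jobs run across many machines):

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit a (word, 1) pair for every word in every record."""
    for record in records:
        for word in record.split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce: sum the emitted values for each key."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

records = ["big data as a service", "big data on hadoop"]
counts = reduce_phase(map_phase(records))
print(counts["big"], counts["data"])  # 2 2
```

Spark keeps the same map/reduce idea but holds intermediate data in memory, which is largely why it displaced classic Hadoop MapReduce for analytics.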

Raymie Stata, Altiscale cofounder and chief executive, notes that the startup will focus on integrating its technology with SAP and will also work on SAP strategy around data and platform.
Altiscale's backers include Accel Partners, AME Cloud Ventures, Northgate, General Catalyst Partners, Sequoia Capital and Wildcat Venture Partners.


Japan Appears To Be Going All-In On Artificial Intelligence

September 20, 2016 by  
Filed under Around The Net

The Japanese government is set to spend $974 million on a 10-year project to make artificial intelligence (AI) a reality.

According to reports, the Riken Center for Advanced Integrated Intelligence Research will open near Tokyo railway station and involve 100 researchers from companies including Toyota and NEC.

The aim is to create AI that will match the intelligence of an average human being by the middle of this century.

While previous attempts to create viable AI have foundered, it is reported that high-capacity computing, big data processing and automated processing techniques will now make it a viable technology.

The researchers will utilise so-called "deep learning". IBM's Watson cognitive system is already reported to devise cancer treatment plans better than those human doctors can come up with.


Is ARM’s Supercomputer Getting Delayed

September 9, 2016 by  
Filed under Computing

Fujitsu’s ARM-powered supercomputer, the Post-K, is not going to be ready for 2020 as planned.

Apparently there are a few design issues with the $910m project and engineers need to get under the bonnet and tinker a bit more.

Japan’s RIKEN research group hired Fujitsu to come up with a 1,000 peta-FLOPS supercomputer to supersede the nation’s K Computer.

Post-K was supposed to be eight times faster than today’s most powerful known supercomputer in the world, China’s Sunway TaihuLight. It would be used to model climate change without cross referencing its results with the Bible, and run scientific simulations.
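The "eight times faster" claim is easy to sanity-check from peak figures: Sunway TaihuLight's theoretical peak is commonly quoted at roughly 125 petaFLOPS, so a 1,000 petaFLOPS (one exaFLOPS) machine comes out at about 8x. The arithmetic, with the TaihuLight figure taken as an assumption here:

```python
# Rough peak-performance comparison (all figures in petaFLOPS).
POST_K_TARGET_PFLOPS = 1000.0   # 1,000 petaFLOPS = 1 exaFLOPS
TAIHULIGHT_PEAK_PFLOPS = 125.0  # commonly quoted theoretical peak (assumed)

speedup = POST_K_TARGET_PFLOPS / TAIHULIGHT_PEAK_PFLOPS
print(speedup)  # 8.0
```

Against TaihuLight's sustained Linpack score of around 93 petaFLOPS the multiple would be nearer 11x, which is why such comparisons always depend on which benchmark you pick.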

What was more interesting was the fact that it was using the ARMv8-A architecture rather than the SPARC64 VIIIfx chips in the older machine. Rumours are, though, that it will be one to two years late.

ARM and Fujitsu were working flat out on adding large vector instructions to ARMv8, bringing the architecture up to scratch for supercomputer applications. However, it now seems that the delays are caused by the new CPU semiconductor technology, which has extended the time required for making system prototypes and detailed designs. It is not clear if the issue is the move from SPARC to ARM or if it is shrinking designs down to 10nm.

The ARM processors were supposed to be 10nm FinFET chips fabricated by TSMC, featuring high-bandwidth memory and the Tofu 6D interconnect mesh that's used in the K Computer. It could be that getting decent yields of processors with all these bells and whistles is going to be harder than expected.

This design work was supposed to have been done and dusted by early 2018, with some executive throwing the switch in 2020. That would have hacked off the US government, which didn't expect to be able to build an exascale supercomputer of its own until 2023.


IBM Develops A Lab-On-A-Chip

August 8, 2016 by  
Filed under Computing

IBM has developed some tech it calls a lab-on-a-chip which can be used to detect cancer or other diseases before any symptoms have appeared.

The tech allows for the separation of bioparticles down to 20nm in diameter, which means that particles such as DNA, exosomes and viruses can be separated out for analysis.

This means that diseases can be detected before any outward signs are visible, giving patients a far better chance of recovery thanks to early treatment.

The company notes that it's developing this technology in conjunction with the Icahn School of Medicine in New York, and the first trial will test it on prostate cancer detection in the US.

The lab-on-a-chip can analyse liquid biopsies from patients and is capable of detecting exosomes with cancer-specific biomarkers. Exosomes, by the way, are vesicles – a small structure within a cell – which are present in bodily fluids such as saliva and urine, and they’re being seen as increasingly important in the diagnosis of malignant tumours.

The big idea is to reach a situation where a simple home diagnostic chip could allow people to regularly monitor their health via urine samples.

Gustavo Stolovitzky, Program Director of Translational Systems Biology and Nanobiotechnology at IBM Research, commented: “The ability to sort and enrich biomarkers at the nanoscale in chip-based technologies opens the door to understanding diseases such as cancer as well as viruses like the flu or Zika. Our lab-on-a-chip device could offer a simple, noninvasive and affordable option to potentially detect and monitor a disease even at its earliest stages, long before physical symptoms manifest.”


