Last month at Intel’s 2016 Developer Forum in Shenzhen, China, the chip giant proposed a broad market transition away from 3.5mm audio jacks in phones and tablets, gradually replacing them with USB-based audio solutions. Conexant has become the first partner company to embrace the standard with its announcement of two new audio chips, the CX20985 and the CX20899.
Conexant is the first company to announce chips designed around Intel’s USB-C Digital Audio standard, according to a recent post by AnandTech. Although the USB Type-C connector supports up to 10Gbps of bidirectional bandwidth, Conexant’s CX20985 and CX20899 chips will start by supporting USB 2.0 bandwidth of up to 480Mbps. That rate still leaves plenty of headroom for high-fidelity 24-bit audio transmissions, along with additional functionality such as equalizer customization and room correction adjustments.
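As a rough sanity check on that headroom claim (the arithmetic below is illustrative, not part of Conexant’s specification), even high-resolution PCM audio uses only a tiny fraction of USB 2.0’s 480Mbps:

```python
# Raw PCM bitrate vs. USB 2.0 bandwidth -- illustrative arithmetic only;
# real USB audio adds packet and protocol overhead on top of this.

def audio_bitrate_mbps(bit_depth, sample_rate_hz, channels):
    """Uncompressed PCM bitrate in megabits per second."""
    return bit_depth * sample_rate_hz * channels / 1e6

hi_res = audio_bitrate_mbps(24, 96_000, 2)  # 24-bit / 96kHz stereo
print(f"24-bit/96kHz stereo needs {hi_res:.2f} Mbps")       # 4.61 Mbps
print(f"Share of USB 2.0's 480 Mbps: {hi_res / 480:.1%}")   # 1.0%
```

Even after protocol overhead, a single stereo stream barely dents the link, which is why features like room correction and multi-channel audio can ride alongside.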
Intel’s standard was made public in late April and is a result of the company’s ambitions to lead the industry toward a unified digital interconnect for at least three purposes – data transfer, charging (up to 100 watts), and now audio. Of course, USB Type-C is a universal all-in-one format that also supports video signals over DisplayPort Alternate Mode (up to 8.1Gbps per lane) but any video transfer capability over USB-C versus alternative formats such as MHL is ultimately left to the discretion of mobile device manufacturers.
The main premise behind USB-C Digital Audio is that Intel wants to use USB-C as the universal port that it was designed to become. This includes audio transmissions to headphones, docking stations, health-monitoring headsets, car stereos, soundboards, wearables, and many other digital playback devices as the format matures. The traditional 3.5mm headphone jack has been around since the 1960s, and while it’s a very reliable and trusted connector, it only carries two-channel analog stereo audio. There is also no path to 5.1 or 7.1 formats, with the most advanced option being “matrixed” Dolby Surround with decoding performed on the device side.
Although no one has yet succeeded in replacing the ubiquitous 3.5mm audio standard, Intel hopes that some of the “smart standard” features proposed in its USB-C Digital Audio specification will be enough to convince even the most diehard analog audio purists that there might finally be a digital alternative worth adopting. Of course, the concept is not new. In the early 2000s, Motorola and others used mini USB connectors for data transfers, charging and headset connectivity. In 2016, Intel wants to do the same thing with USB-C, only this time there is enough bandwidth to carry premium lossless audio content while letting headsets perform the digital-to-analog conversion and draw the power needed to drive it, all over the same connector.
The CX20985 features a 24-bit DAC with sampling rates up to 48kHz and a stereo ADC for music and voice applications, including Skype for Business and Google’s basic Android Wired Audio Headset 1.1 specification. The chip features a five-band parametric equalizer for playback, a two-band equalizer for recording, and an integrated capless headphone driver that eliminates the need for AC coupling capacitors. The CX20985 also features very low idle current draw of just 22.6mA at between 4.35 and 5.25V. Conexant expects its headphone-optimized USB-C digital audio chip to hit mass production in July using a 6 x 6mm 50-pin QFN package.
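To give a sense of what one band of a parametric equalizer like the CX20985’s does, here is a minimal sketch of a peaking-EQ biquad in the standard audio-cookbook form. This is a generic textbook filter, not Conexant’s implementation; the sample rate, center frequency, gain and Q values are illustrative:

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Peaking-EQ biquad coefficients (RBJ audio EQ cookbook form)."""
    a = 10 ** (gain_db / 40)                 # amplitude from dB gain
    w0 = 2 * math.pi * f0 / fs               # normalized center frequency
    alpha = math.sin(w0) / (2 * q)
    num = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    den = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    # Normalize so den[0] == 1
    return [x / den[0] for x in num], [x / den[0] for x in den]

def biquad(samples, b, a):
    """Direct-form I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Example: a 6dB boost centered at 1kHz with Q=1 at a 48kHz sample rate.
b, a = peaking_eq_coeffs(48_000, 1_000, 6.0, 1.0)
```

A five-band equalizer is simply five such biquads in series, each tuned to a different center frequency.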
Conexant CX20899 – designed for high-end headsets and docking stations
The CX20899 features a full-fledged DSP with sampling rates up to 96kHz and a stereo ADC at the same 96kHz sampling rate for playback over differential “line-out” or a capless headphone driver. The DSP supports a variety of more advanced functions including digital room correction, microphone automatic gain control (AGC), acoustic echo cancellation and a programmable equalizer, among others. The CX20899 also supports PCM/I2S and S/PDIF outputs for docking stations and higher-end mobile equipment over USB-C connections. While more power-hungry, this chip still features reasonable idle current draw of just 1.8mA at between 4.35 and 5.25V. Conexant says this chip is already in mass production and uses a 76-pin QFN package.
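One of those DSP functions, automatic gain control, can be illustrated with a toy frame-based loop that nudges the gain toward a target level. This is a generic sketch, not the CX20899’s algorithm; the target level, frame size, gain cap and smoothing constant are made-up illustrative values:

```python
def agc(samples, target_rms=0.1, frame=256, max_gain=10.0, smoothing=0.9):
    """Frame-based automatic gain control: smoothly steer gain toward a target RMS."""
    gain, out = 1.0, []
    for i in range(0, len(samples), frame):
        block = samples[i:i + frame]
        rms = (sum(s * s for s in block) / len(block)) ** 0.5
        if rms > 1e-6:  # skip near-silent frames to avoid amplifying noise
            desired = min(target_rms / rms, max_gain)
            gain = smoothing * gain + (1 - smoothing) * desired
        out.extend(s * gain for s in block)
    return out

# Example: a quiet constant signal is gradually brought up toward the target level.
boosted = agc([0.01] * 2048)
```

The smoothing term is what keeps real AGCs from "pumping": the gain ramps over many frames instead of jumping per sample.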
Conexant has traditionally been an early adopter of USB audio formats and has released at least eight current USB-based solutions over the past two years, typically costing around $1 per unit. In June 2014 the company announced the CX20562, a USB DSP codec with Class-D amplifier (48-pin QFN package), capable of driving 1.2 watts into a 4-ohm load through USB host power. In August 2014, it announced the CX2077x, a 24-bit USB DSP codec system-on-chip with integrated PWM LED drivers. Two other interesting products released the same year were the CX2087x and CX20833, featuring USB-based Dolby and DTS headphone decoders with Class-D amplifiers. We expect the company’s new CX20985 and CX20899 USB-C chips to become available for device manufacturers to use sometime towards the end of the year, with initial products being unveiled next January at CES.
Intel’s USB-C Digital Audio specification contains two side-band pins, SBU1 and SBU2, that can transfer analog audio signals when a host device is put into “audio adapter accessory” mode. One journalist from AndroidAuthority.com notes that this solution may not be favored by audio purists, as it puts potentially cleaner analog audio signals next to noisier digital audio pathways. While this can be a concern on paper, real-world results could be drastically different with the migration of multi-function processing units (MPUs) from mobile devices onto headphones and headsets themselves.
Currently, many smartphones and other mobile devices contain MPUs that support features like non-linear processing, acoustic echo cancellation, noise suppression and beamforming, among other things. If Intel is offloading these features onto headphones and headsets, we can definitely expect digital audio accessory prices to rise significantly, on the presumption that they will integrate “smart features” previously reserved for on-board smartphone and tablet codecs and DACs.
The other concern is industry-wide adoption. Currently, there is a rather partisan split between Apple and Google (all Android-enabled devices) on mobile industry standards and connection peripherals. In 2012, Apple made a practical yet shameless move by launching its own Lightning standard as an alternative to USB, knowing full well that the market would splinter rather than unify around a single data and charging protocol. Three years later, USB-C was announced, albeit too late to undo the standards fracture that Apple had willingly placed on the mobile industry.
If Apple and Android smartphone manufacturers choose to replace the 3.5mm headphone jack with a USB-C solution, then they would be obligated to include USB-C to 3.5mm adapters with every new device in order to avoid backlash from users with traditional audio equipment.
It is unlikely that Apple will do this, as the company is known for charging every last penny on adapters that should often be included with new device purchases – $29 for its Thunderbolt to Gigabit Ethernet adapter, $19 for its USB-C to USB adapter and $25 for its USB-C to Lightning Cable. If Apple continues to market Lightning headphones as a viable alternative to USB-C, there is guaranteed to be another format war and one in which Apple’s prospects are considerably outnumbered by Android users.
On the other hand, if Android manufacturers completely remove 3.5mm headphone jacks, replace them with USB-C-only solutions and do not provide adapters, this would only anger users of traditional audio equipment, who would be required to purchase adapters out of pocket while buyers of newer USB-C gear need nothing extra.
The solution, of course, is to gradually phase out 3.5mm audio jacks rather than removing them overnight and requiring millions of users to purchase adapters. Intel, AMD, Samsung, Dell, Lenovo and others announced plans to phase out VGA and LVDS connectors as early as Q4 2010, and the connectors were eventually removed from most new devices between 2013 and 2015.
The only issue with an industry-wide adoption proposal is that Apple would need to plan an exit strategy for its Lightning connector and embrace USB-C in its next-generation mobile products, due about a year and a half from now. Whether or not the company is willing to accept a little humility over its 2012 Lightning move is a question for its executives and investors, among the other private circles involved in the matter.
Intel’s new mobile USB-C digital audio standard is due to be finalized before the end of June and is ultimately designed to support more recent audio formats and features, improve internal power management, and add new discovery and configuration models to enable “simpler” devices. New devices adopting the standard are expected to gain additional features including a thermal sensor for earpieces. This may prove useful for measuring temperature during workouts and providing some in-ear biometric data back to a connected mobile device.
Asus has dropped a bomb ahead of Computex 2016 by teasing Nvidia’s upcoming mobile GeForce GPU, which is apparently faster than a desktop GTX Titan X.
Teased on its Republic of Gamers website, Asus did not reveal much, but it did show a 3DMark 11 screenshot in which the new mystery GPU scores above the GTX Titan X.
While it is obviously based on Nvidia’s new Pascal GPU architecture, there is little information about the new mystery GPU, but rumors suggest it might be the GTX 1080m.
Nvidia might have gone the same way it did with the GTX 980 for notebooks and released a fully-enabled GP104 GPU with 2560 CUDA cores, as the score is pretty much identical to the desktop GTX 1080. The GTX 1080m could end up with GDDR5 memory rather than the GDDR5X used by the desktop GTX 1080, which would make it more similar to the upcoming GTX 1070.
According to the results, the new mobile GPU scores P20811 in 3DMark 11, significantly higher than the GTX 980 Ti and even the GTX Titan X.
We will certainly keep an eye on this mystery GPU, and hopefully Asus will reveal a bit more information before the Computex 2016 show, where the official announcement is expected.
ARM’s collaboration with TSMC has finally borne fruit with the tapeout of a 10nm test chip to show off the company’s readiness for the new manufacturing process.
The new test chip contains ARM’s yet-to-be-announced “Artemis” CPU core, which is named after a goddess who will turn you into a deer and tear you apart with wild dogs if you ever see her. [The NDA must have been pretty tough on this chip. Ed]
In fact, things have been ticking along on this project for ages. ARM discloses that tapeout actually took place back in December last year, and it expects silicon to come back from the foundry in the coming weeks.
ARM actually implemented a full four-core Artemis cluster on the test chip, which should show vendors what is possible for their production designs. The test chip has a current-generation Mali GPU implementation with one shader core to show vendors what they will get when they use ARM’s POP IP in conjunction with its GPU IP. There is also a range of other IP blocks and I/O interfaces used to validate the new manufacturing process.
TSMC’s 10FF manufacturing process is supposed to increase density, with scaling of up to 2.1x compared to the previous 16nm manufacturing node. It also brings about 11-12 per cent higher performance at each process’s respective nominal voltage, or a 30 per cent reduction in power.
ARM said that comparing a current Cortex-A72 design on 16FF+ with an Artemis core on 10FF, the new CPU and process can halve dynamic power consumption. Currently, clock frequencies on the new design are still behind those of the older, more mature process and IP, but ARM expects this to improve as it optimizes its POP IP and the process stabilizes.
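The halving ARM describes is consistent with the classic CMOS dynamic-power relation P = C·V²·f. The scaling factors below are illustrative assumptions to show how the arithmetic works, not figures from ARM or TSMC:

```python
def dynamic_power(cap, volt, freq):
    """Classic CMOS switching power: P = C * V^2 * f (in consistent units)."""
    return cap * volt ** 2 * freq

baseline = dynamic_power(1.0, 1.0, 1.0)   # normalized 16FF+ design
# Assumed example factors: ~30% lower switched capacitance, ~15% lower voltage.
shrunk = dynamic_power(0.70, 0.85, 1.0)
print(f"Relative dynamic power: {shrunk / baseline:.2f}")  # 0.51, i.e. roughly halved
```

Because voltage enters squared, even a modest voltage reduction from a new process contributes disproportionately to the power savings.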
“Chrome PCs overall, including Chrome desktop units like the Chromebox, out-shipped all Apple personal computers, desktop plus notebook, in the U.S. for Q1,” said Jay Chou, one of several IDC analysts who track device shipments, in an email reply to questions.
Chromebooks, the inexpensive notebooks that run Chrome OS, also out-shipped Apple’s MacBook, MacBook Air and MacBook Pro notebooks in the U.S. The first-quarter battle wasn’t even close, according to the notebook-only shipment numbers Chou provided.
Apple shipped an estimated 1.17 million Mac notebooks in the U.S. during the first three months of 2016; IDC said 1.6 million Chrome OS notebooks shipped in the same span.
In other words, 37% more Chromebooks shipped than Mac notebooks.
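The 37% figure follows directly from IDC’s two shipment estimates:

```python
chromebooks = 1.60e6   # IDC estimate, Chrome OS notebooks, US Q1 2016
macbooks = 1.17e6      # IDC estimate, Mac notebooks, US Q1 2016

lead_pct = (chromebooks - macbooks) / macbooks * 100
print(f"Chromebooks out-shipped Mac notebooks by {lead_pct:.0f}%")  # 37%
```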
Last week, Tom Warren of The Verge reported that Chrome OS hardware had out-shipped OS X-equipped Macs after speaking with one of Chou’s colleagues. Subsequently, numerous other outlets, including blogs and mainstream media websites, picked up Warren’s report.
IDC’s shipment data for Chrome OS and OS X systems were estimates generated using information from vendors and Asian component suppliers. Google, which developed Chrome OS, does not reveal shipment numbers: Most Chromebooks originate from third-party OEMs (original equipment manufacturers), including Acer, Asus, Dell, Hewlett-Packard and Lenovo. And although Apple disclosed global Mac sales in its April 26 earnings call with Wall Street, it did not break down that figure by geographic region.
That IDC’s numbers were estimates only was clear when comparing the research firm’s forecast to Apple’s stated sales for the first quarter. Prior to April 26 — when Apple said it had sold 4.03 million Macs worldwide – IDC had projected global Mac shipments at 4.47 million, or about 10% too high.
The changes will be aimed at enterprises, the only customer group for which Microsoft recommends running IE11 on the new operating system.
“We recognize that some enterprise customers have line-of-business applications built specifically for older web technologies, which require Internet Explorer 11,” the company said in a blog post.
Previously, Microsoft included “Enterprise Mode” in Windows 10, a feature that lets an IT staff limit IE11’s operation to specific legacy websites or web apps.
Starting with the Anniversary Update — Microsoft’s name for the one major upgrade it will deliver for 10 this year — the “interstitial” page, one that pops up between running Edge and IE11 when Enterprise Mode kicks in, will vanish.
Currently, a switch from Edge to IE11 opens a page that states, “This website needs Internet Explorer 11” before IE11 fires up. With the Anniversary Update, the interstitial will no longer appear: IE11 will simply open atop Edge when the user steers to a site or app on the Enterprise Mode whitelist.
The same no-interstitial-page behavior will take place when a worker running IE11 types in a URL that is not on the list: Edge will open without a pause.
Microsoft will also introduce a new group policy for IE11 that will limit the browser’s use to only those sites on the whitelist, barring users from running IE11 for the bulk of their browsing. “Enabling this setting automatically opens all sites that are not included in the Enterprise Mode Site List in Microsoft Edge,” Microsoft said.
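The routing behavior Microsoft describes boils down to a whitelist check. Here is a minimal sketch; the host names are hypothetical, and a real deployment matches against an Enterprise Mode Site List XML file rather than a Python set:

```python
# Hypothetical whitelist; real deployments use an Enterprise Mode Site List XML file.
ENTERPRISE_MODE_SITES = {"legacyapp.contoso.com", "intranet.contoso.com"}

def choose_browser(hostname):
    """Whitelisted legacy sites open in IE11; everything else opens in Edge."""
    return "IE11" if hostname in ENTERPRISE_MODE_SITES else "Edge"

print(choose_browser("legacyapp.contoso.com"))  # IE11
print(choose_browser("www.example.com"))        # Edge
```

The new group policy simply makes this routing bidirectional: attempts to browse off-list sites from within IE11 get bounced back to Edge.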
IE and Edge have a rapidly-shrinking share of the browser market, but the former will remain important to businesses with older apps and customized internal sites, which unless rewritten will require the older browser. Together, IE and Edge were run by 41.3% of the world’s users in April, a new low that dropped Microsoft into second place behind Google’s Chrome browser.
HelloTech will combine its network of about 150 college students who provide on-demand tech repair to Southern California consumers with Geekatoo’s U.S. network of about 5,000 technicians, the companies said in a joint statement.
The merger connects HelloTech with Geekatoo’s national market and provides Geekatoo with more access to venture capital funding, HelloTech co-founder Richard Wolpert said in an interview.
HelloTech, which launched about a year ago, has raised $17 million from investors, while 5-year-old Geekatoo has raised close to $3 million.
“You could either use capital to expand really quickly or you could merge with a company like Geekatoo that had already spent money doing this,” said Mark Suster, managing partner at Upfront Ventures, which backed HelloTech.
The new company keeps the HelloTech name and will be led by Wolpert. He said the deal was a stock transaction, rather than a cash payment, but declined to provide further details.
Both companies dispatch in-home tech support within hours of a request to fix a wonky printer, install a new TV or troubleshoot WiFi problems, among other services.
HelloTech hit a few bumps last year after launching, with some negative customer feedback that its workforce of predominantly college students was unprofessional.
Wolpert said the company has worked out the glitches. HelloTech has a five-star rating on customer review site Yelp.
Geekatoo Executive Chairman Christian Shelton saw demand for tech services rising as more people add internet-connected devices – such as the smart thermostat Nest or WiFi camera Dropcam – to their homes.
The U.S. tech support industry makes about $30 billion in annual revenue, according to research by Parks Associates, a consulting firm.
“The opportunity is massive,” Wolpert said.
The company’s main competition is Geek Squad, a tech support service founded in 1994 and owned by big-box retailer Best Buy.
HelloTech targets baby boomers with disposable income to spend on new gadgets and someone to help get them up and running.
“There is enormous wealth in the baby boomer generation,” Suster said, and their “digital lives are becoming increasingly complicated.”
The SWIFT network itself is still secure, it insisted in a letter to banks and financial institutions. However, some of its customers have suffered security breaches in their own infrastructure, allowing attackers to fraudulently authorize transactions and send them over the SWIFT network, it said.
That’s the best explanation so far for how authenticated instructions were sent from Bangladesh Bank to the U.S. Federal Reserve Bank of New York over the SWIFT network, ordering the transfer of almost $1 billion. The Fed transferred around $101 million of that before identifying an anomaly in one of the instructions. Only $20 million of that has so far been recovered.
“While customers are responsible for the security of their own environment, security is our top priority and as an industry-owned cooperative we are committed to helping our customers fight against cyber-attacks,” SWIFT said in the letter.
SWIFT wants its customers to come forward with information about other fraudulent transfers made using their SWIFT credentials, to help it build a picture of how the attackers are working.
It’s making more than a polite request: It reminded its customers that they have an obligation to provide such information under the terms of their contract, and also to help SWIFT identify, investigate and resolve problems, including by providing diagnostic information following an incident.
SWIFT promised its customers it would share new information about malware or other indicators of compromised systems. It said it would add such information to a restricted section of its website, tacking it onto knowledge base tip number 5020928, “Modus Operandi related to breaches in customer’s environment.”
It looks like Qualcomm wants to make drones smarter, and the company plans to use its Snapdragon developer board to do so. We had a chance to see proof-of-concept drones that are capable of sensing and mapping their environment.
Hugo Swart, senior director and head of IoE consumer electronics at Qualcomm, explained that the smart drone market is currently headed toward consumer electronics. Swart confirmed that the first drones powered by Qualcomm’s Snapdragon Flight drone platform should be commercially available very soon.
The company sees drones as flying cameras, as most drones sold are used for video or aerial photography. The drone we saw demonstrated at Qualcomm’s San Diego campus was powered by a Snapdragon 410c developer board, and it is a light device: it weighs just below 250 grams and is made from composite materials. It packs a few cameras, four rotors and the Snapdragon 410-based developer board that makes the drone smart.
The actual weight is an important detail, as drones under 250 grams do not have to be registered with the aviation authorities in the US. The demo showed a drone that used multiple cameras to map the world around it, making it aware of its surroundings.
The operator used a tablet to fly the drone, and the software had some nice features, like using GPS to mark a position; when necessary, the operator could just press a button and the drone would find its way back to the marked position.
Since the drone uses multiple cameras to map the world around it, it is able to find a new path and avoid obstacles in its flight path. The demonstration we saw was done in a controlled environment with a huge rock in the middle, and the drone avoided the rock just as you would expect.
The drone was able to detect a wall and would not let you fly into it and damage the drone. It would simply stop and would not crash no matter how hard you tried. Another nice feature was that the drone could find its own way to a position marked by GPS. It did not have to retrace the path it had already flown; it could find a shorter path to the marked position instead.
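Qualcomm did not describe its path-planning algorithm, but the "find a shorter path around an obstacle" behavior can be sketched with something as simple as a breadth-first search over an occupancy grid; the grid and coordinates below are illustrative:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS over a 2D occupancy grid (1 = obstacle). Returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # goal unreachable

# A "rock" in the middle of a 3x3 area: the path routes around it.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(shortest_path(grid, (0, 0), (2, 2)))
```

A real flight controller plans in continuous 3D space with a depth map built from the cameras, but the principle of searching for the cheapest obstacle-free route is the same.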
Adding a Snapdragon SoC to a drone would definitely make flights safer and help you avoid damaging the drone or things around it. If you fly big drones with big cameras, for example, you do not want to crash one and potentially destroy hundreds of dollars’ worth of equipment.
Swart believes that drones using Snapdragon Flight technology will first find their way into “flying camera” drones, with commercial applications to follow later. Yes, at some point in the future, drones powered by this technology should even be able to deliver packages; that is one potential area.
The only downside of this super-lightweight drone was its small battery, which allowed only six to eight minutes of flight. Of course, a larger drone with a larger battery would fly longer, but as we said, this is a proof of concept designed to show the capabilities of these flying cameras. Qualcomm’s customers will make the actual devices; the drone we saw in the demo room was just there to show off the platform.
Partners will design their own drones using the developer board (or an integrated Snapdragon platform in an actual drone). The important part is the software, which combines the flying hardware and the visual computing into one smart flying drone. If you are into drones, this will definitely improve the overall experience.
The company was rumored to have been designing its own chip, based partly on job ads it posted in recent years. But until today it had kept the effort largely under wraps.
It calls the chip a Tensor Processing Unit, or TPU, named after the TensorFlow software it uses for its machine learning programs. In a blog post, Google engineer Norm Jouppi refers to it as an accelerator chip, which means it speeds up a specific task.
At its I/O conference Wednesday, CEO Sundar Pichai said the TPU provides an order of magnitude better performance per watt than existing chips for machine learning tasks. It’s not going to replace CPUs and GPUs, but it can speed up machine learning processes without consuming a lot more energy.
As machine learning becomes more widely used in all types of applications, from voice recognition to language translation and data analytics, having a chip that speeds up those workloads is essential to maintaining the pace of advancement.
The TPU is in production use across Google’s cloud, including powering the RankBrain search result sorting system and Google’s voice recognition services. When developers pay to use the Google Voice Recognition Service, they’re using its TPUs.
Urs Hölzle, Google’s senior vice president for technical infrastructure, said during a press conference at I/O that the TPU can augment machine learning processes but that there are still functions that require CPUs and GPUs.
Google started developing the TPU about two years ago, he said.
Right now, Google has thousands of the chips in use. They’re able to fit in the same slots used for hard drives in Google’s data center racks, which means the company can easily deploy more of them if it needs to.
British chipmaker ARM has acquired Apical, an imaging and embedded computer vision company, in a $350 million cash deal.
Apical’s products are used in more than 1.5 billion smartphones and 300 million other devices, all over the world, including IP cameras, digital stills cameras and tablets.
Its products will be used in ARM’s ‘next generation vehicles’, security systems, robotics, mobile and other consumer, smart building, industrial and retail applications. These devices will be able to ‘understand and act intelligently on information from their environment,’ the press release claims.
It also said Apical’s technology will complement the ARM Mali graphics, display and video processor roadmap.
ARM CEO Simon Segars said that computer vision is in the early stages of development:
“The world of devices powered by this exciting technology can only grow from here. Apical is at the forefront of embedded computer vision technology, building on its leadership in imaging products that already enable intelligent devices to deliver amazing new user experiences. The ARM partnership is solving the technical challenges of next generation products such as driverless cars and sophisticated security systems. These solutions rely on the creation of dedicated image computing solutions and Apical’s technologies will play a crucial role in their delivery.”
There are three products being looked at: Spirit (computer-vision technology), Assertive Display (screens which adapt to changes in light) and Assertive Camera (new performance advances, including dynamic range, noise reduction and colour management).
The announcement was posted on a dark market website called TheRealDeal by a user who wants 5 bitcoins, or around $2,200, for the data set that supposedly contains user IDs, email addresses and SHA1 password hashes for 167,370,940 users.
According to the sale ad, the dump does not cover LinkedIn’s complete database. Indeed, LinkedIn claims on its website to have more than 433 million registered members.
Troy Hunt, the creator of Have I been pwned?, a website that lets users check whether they were affected by known data breaches, said it’s highly likely that the leak is legitimate. He had access to around 1 million records from the data set.
“I’ve seen a subset of the data and verified that it’s legit,” Hunt said.
LinkedIn suffered a data breach back in 2012, which resulted in 6.5 million user records and password hashes being posted online. It’s highly possible that the 2012 breach was actually larger than previously thought and that the rest of the stolen data is surfacing now.
LinkedIn did not immediately respond to a request for comment.
Attempts to contact the seller failed, but the administrators of LeakedSource, a data leak indexing website, claim to also have a copy of the data set and they believe that the records originate from the 2012 LinkedIn breach.
When the 6.5 million LinkedIn password hashes were leaked in 2012, hackers managed to crack over 60 percent of them. The same thing is likely true for the new 117 million hashes, so they cannot be considered safe.
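The reason unsalted SHA1 hashes crack so easily is that every user with the same password gets the same hash, so a precomputed dictionary lookup does most of the work. A toy sketch, with a made-up password and wordlist standing in for the real dump:

```python
import hashlib

# Unsalted SHA1, as reportedly used in the leaked LinkedIn data: identical
# passwords always produce identical hashes, enabling dictionary attacks.
def sha1_hex(password):
    return hashlib.sha1(password.encode()).hexdigest()

leaked_hashes = {sha1_hex("linkedin123")}          # stand-in for the leaked dump
wordlist = ["password", "linkedin123", "qwerty"]   # tiny illustrative wordlist

cracked = {sha1_hex(w): w for w in wordlist if sha1_hex(w) in leaked_hashes}
print(cracked)
```

Salting (a random per-user value mixed into each hash) defeats this shortcut, which is why unsalted hashes are considered broken for password storage.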
Worse still, it’s very likely that many LinkedIn users affected by this leak haven’t changed their passwords since 2012. Hunt was able to verify that for at least one HIBP subscriber whose email address and password hash were in the new data set that is now up for sale.
Many people affected by this breach are also likely to have reused their passwords in multiple places on the Web, Hunt said via email.
Moving forward with his attempt to attract Indian customers and developers, Apple CEO Tim Cook announced that the company is setting up a new development center for its Maps product in Hyderabad in southern India.
Apple announced earlier on Wednesday that it would set up a facility in Bangalore by early next year to focus on helping developers with best practices and to improve the design, quality and performance of their apps on the iOS platform.
Cook is on his first visit to India, where the company saw a 56 percent year-on-year growth in iPhone sales in the first quarter even as its global iPhone sales and overall revenue dropped.
Apple’s new center will focus on the development of Maps for Apple products such as the iPhone, iPad, Mac and Apple Watch. The investment will accelerate Maps development and create up to 4,000 jobs, the company said.
The Cupertino, California, company did not disclose the size of its investment in the center though some reports have placed the figure at $25 million.
A large number of U.S. companies, including Texas Instruments, Oracle, Microsoft and IBM, have set up software, chip design and product development centers in India, to tap the country’s large pool of engineers.
“The talent here in the local area is incredible and we are looking forward to expanding our relationships and introducing more universities and partners to our platforms as we scale our operations,” Cook said in a statement.
India is the third-largest smartphone market in the world, after China and the U.S., according to Gartner research director Anshul Gupta.
The move will open up new opportunities for designers of autonomous vehicles and security systems, among other connected things, according to ARM CEO Simon Segars. Computer vision is in its early stages, and Apical is at the forefront of embedding such technology, he said.
Apical’s technologies are already used in 1.5 billion smartphones, according to ARM, although many of those phones may be using nothing more sophisticated than a display brightness control Apical calls Assertive Display. That technology also turned up in Samsung Electronics’ new laptop, the ATIV Book 9.
Assertive Camera is another of Apical’s developments: It’s a range of software packages and silicon-based image signal processors for reducing image noise, managing color and shooting high dynamic range images.
ARM makes its money by designing chips that others manufacture, or licensing its chip modules for others to incorporate in their own designs.
In that context, Apical’s Spirit silicon building blocks are perhaps where ARM sees the most opportunity for growth. The Spirit silicon blocks process raw sensor data or video into a machine-readable representation of an image in an energy-efficient way, so ARM and its partners can use them to add computer vision capabilities to future low-power devices.
Putting image analysis and interpretation capabilities in hardware could accelerate and simplify the design of a whole host of products, including self-driving cars and security systems.
ARM paid US$350 million for Apical, closing the deal Tuesday, it said.
Nokia has demonstrated the feasibility of 10Gbps symmetrical data speeds over traditional hybrid fibre-coaxial (HFC) cable networks, such as those operated by Virgin Media in the UK.
Trumping BT’s 5Gbps XG.fast trials, Nokia’s prototype technology, called XG-Cable, is still at the proof-of-concept stage, but should easily integrate into the DOCSIS 3.1 suite of specifications focused on providing cable operators with technology innovations to transform the industry.
DOCSIS is the set of standards governing data access over cable TV networks, and DOCSIS 3.1 was designed to enable capacities of 10Gbps downstream, but only 1Gbps upstream. Nokia has taken this a step further by demonstrating that symmetrical speeds of 10Gbps are possible.
The technology is still at an early stage of development and no in-service date has been even floated by Nokia, but the test by Nokia Bell Labs has apparently demonstrated that the technology is viable using existing HFC cable networks, where fibre-optic cable is used to connect to cabinets on the street and coaxial copper cable lines are used for last-leg distribution to the customer premises.
XG-Cable means that cable operators will at some point in the future be able to use existing HFC cables in the last 200 meters to provide upstream speeds never before achievable owing to the limited spectrum available, according to Nokia.
This will enable the provision of ultra-fast broadband services to consumer locations that were not physically or economically viable unless fiber was brought all the way to the premises.
“The XG-Cable proof-of-concept is a great example of our ongoing effort and commitment to provide the cable industry with the latest innovations and technology needed to effectively address the growing demand for gigabit services,” said Federico Guillén, president of fixed networks at Nokia.
“The proof-of-concept demonstrates that providing 10Gbps symmetrical services over HFC networks is a real possibility for operators. It is an important achievement that will define the future capabilities and ultra-broadband services cable providers are able to deliver.”
Twitter Inc users will soon have more flexibility in composing tweets, because the company plans to stop counting photos and links against its 140-character limit, according to a Bloomberg report.
The social media platform has faced stagnant user growth. Months earlier, Twitter Chief Executive Jack Dorsey said the company would simplify its product in an effort to attract new users.
“We think there’s a lot of opportunity in our product to fix some broken windows that we know are inhibiting growth,” Dorsey said during a February earnings call.
Links currently take up to 23 characters of a tweet, limiting the amount of commentary that users can offer when sharing articles or other content.
Twitter shares have fallen more than 70 percent over the past year.
Twitter declined to comment on the report.