IEEE University of Lahore

July 27th, 2019

Adafruit’s Limor Fried ported TensorFlow Lite to the Arduino ecosystem so you can make your own AI-powered projects

I want to find out what happens when we bring machine learning to cheap, robust devices that can have all kinds of sensors and work in all kinds of environments. And I want you to help. The kind of AI we can squeeze into a US $30 or $40 system won’t beat anyone at Go, but it opens the door to applications we might never even imagine otherwise.

Specifically, I want to bring machine learning to the Arduino ecosystem. This has become recently possible thanks to improvements in hardware and software.

On the hardware side, Moore’s Law may be running out of steam when it comes to cutting-edge processors, but the party’s not over when it comes to microcontrollers. Microcontrollers based on 8-bit AVR processors dominated the Arduino ecosystem’s early years, for example, but in more recent years, embedded-chip makers have moved toward more powerful ARM-based chips. We can now put enough processing power into these cheap, robust devices to rival desktop PCs of the mid 1990s.

On the software side, a big step has been the release of Google’s TensorFlow Lite, a framework for running pretrained neural networks—also known as models—on so-called edge devices. Last April, IEEE Spectrum’s Hands On column looked at Google’s Coral Dev Board, a single-board computer that’s based on the Raspberry Pi form factor, designed to run TensorFlow Lite models. The Coral incorporates a dedicated tensor processing unit and is powerful enough to process a live video feed and recognize hundreds of objects. Unfortunately for my plan, it costs $150 and requires a hefty power supply, and its bulky heat sink and fan limit how it can be packaged.

But fortunately for my plan, Pete Warden and his team have done amazing work in bringing TensorFlow Lite to chips based on ARM’s Cortex family of processors. This was great to discover, because at my open-source hardware company, Adafruit Industries, our current favorite processor is the 32-bit SAMD51, which incorporates a Cortex-M4 CPU. We’ve used the SAMD51 as the basis for many of our recent and upcoming Arduino-compatible boards, including the PyGamer, a simple battery-powered gaming handheld. What if we could use it to literally put AI into people’s hands?

Warden had created a speech-recognition model that can identify the words “yes” and “no” in an analog audio feed. I set about seeing if I could bring this to the PyGamer, and what I might do with a model that could recognize only two words. I wanted to create a project that would spark the imagination of makers and encourage them to start exploring machine learning on this kind of hardware.

I decided to make it as playful as possible. The more playful something is, the more forgivable its mistakes. There’s a reason Sony gave its artificial pet Aibo the form of a puppy: real puppies are clumsy, sometimes run into walls, and don’t always follow instructions.

I recalled the original Tron movie, where the hero is stuck in cyberspace and picks up a sidekick of sorts, a single bit that can say only “yes” or “no,” with an accompanying change of shape. The PyGamer has a 1.8-inch color display, with 192 kilobytes of RAM and 8 megabytes of flash file storage, enough to display snippets of video from Tron showing the bit’s “yes” and “no” responses. The PyGamer’s SAMD51 processor normally runs at 120 megahertz, which I overclocked to 200 MHz for a performance boost. I connected an electret microphone breakout board to one of the PyGamer’s three JST ports.

Then I turned to the trickiest task: porting the TensorFlow Lite ARM code written by Warden and company into a library that any Arduino programmer can use (although not for every Arduino board! Even as a “lite” framework, the RAM requirements are far beyond the 2 kB of the Arduino Uno, for example).

I found the source code well written, so the biggest challenge became understanding how it handles incoming data. Data is not digested as a simple linear stream but in overlapping chunks. I also wanted to expose the code’s capabilities in a way that would be familiar to Arduino programmers and wouldn’t overwhelm them. I identified the functions most likely to be useful to programmers, so that data could be easily fed into a model from a sensor, such as a microphone, and the results outputted to the rest of the programmer’s code for them to handle as they wish. I then created an Arduino library incorporating these functions, which you can find on Adafruit’s Github repository.
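That overlapping-chunk idea is worth a closer look. Here is a minimal sketch in Python (illustrative only; the window and stride values below are round example figures, not the library’s actual parameters) of how a continuous stream of samples gets sliced so that each inference window shares audio with its neighbor:

```python
# Illustrative sketch: slicing an audio stream into overlapping windows,
# the way streaming speech models consume samples. Sizes are example values.
def overlapping_windows(samples, window=480, stride=320):
    """Yield fixed-size windows; neighbors overlap by window - stride samples."""
    for start in range(0, len(samples) - window + 1, stride):
        yield samples[start:start + window]

# At 16 kHz, 480 samples is 30 ms and a 320-sample stride is 20 ms,
# so consecutive windows share 10 ms of audio.
audio = list(range(16000))                 # one second of stand-in samples
chunks = list(overlapping_windows(audio))  # 49 overlapping windows
```

Each window is then turned into features and fed to the model; the overlap means a word that straddles a chunk boundary is still seen whole by some window.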

Putting it all together, I wrote a short program using the new library. Now when I press a button and speak into the PyGamer’s attached microphone, the appropriate Tron clip is triggered if I say “yes” or “no,” letting me walk around with my own animated sidekick.

Although this project is a limited (but fun) intro to machine learning, I hope it persuades more makers and engineers to combine AI with hardware explorations. Our next steps at Adafruit will be to make it easier to install different models and create new ones. With 200 kB of RAM, you could have a model capable of recognizing 10 to 20 words. But even more exciting than voice recognition is the prospect of using these cheap boards to gather data and run models built around very different kinds of signals. Can we use data from the PyGamer’s onboard accelerometer to learn how to distinguish the user’s movements in doing different tasks? Could we pool data and train a model to, say, recognize the sound of a failing servo or a switching power supply? What surprises could lie in store? The only way to find out is to try.

This article appears in the August 2019 print issue as “Making Machine Learning Arduino Compatible.”

Editor’s note: Limor Fried is a member of IEEE Spectrum’s editorial board.

July 27th, 2019

Your weekly selection of awesome robot videos

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada

Let us know if you have suggestions for next week, and enjoy today’s videos.

July 26th, 2019

Inventions can be classified, like animals and plants, into kingdoms, phyla, and species

Our count of living species remains incomplete. In the 250-plus years since Carl Linnaeus set up the modern taxonomic system, we have classified about 1.25 million species, about three-quarters of them animals. Another 17 percent are plants, and the remainder are fungi and microbes. And that’s just the official count—the number of still-unrecognized species could be several times higher.

The diversity of man-made objects is easily as rich. Although my comparisons involve not just those proverbial apples and oranges but apples and automobiles, they still reveal what we have wrought.

I begin constructing my parallel taxonomy with the domain of all man-made objects. This domain is equivalent to the Eukarya, all organisms having nuclei in their cells. It contains the kingdom of complex, multicomponent designs, equivalent to Animalia. Within that kingdom we have the phylum of designs powered by electricity, equivalent to the Chordata, creatures with a dorsal nerve cord. Within that phylum is a major class of portable designs, equivalent to Mammalia. Within that class is the order of communications artifacts, equivalent to Cetacea, the order of whales, dolphins, and porpoises, and it contains the family of phones, equivalent to Delphinidae, the oceanic dolphins.

Families contain genera, such as Delphinus (common dolphin), Orcinus (orcas), and Tursiops (bottlenose dolphins) in the ocean. And, according to GSM Arena, which monitors the industry, in early 2019 there were more than 110 mobile-phone genera (brands). Some genera contain just a single species—for instance, Orcinus contains only Orcinus orca, the killer whale. Other genera are species-rich. In the mobile-phone realm none is richer than Samsung, which now includes nearly 1,200 devices. It is followed by LG, with more than 600, and Motorola and Nokia, each with nearly 500 designs. Altogether, in early 2019 there were some 9,500 different mobile “species”—and that total is considerably larger than the known diversity of mammals (fewer than 5,500 species).

Even if we were to concede that mobile phones are just varieties of a single species (like the Bengal, Siberian, and Sumatran tigers), there are many other numbers that illustrate how species-rich our designs are. The World Steel Association lists about 3,500 grades of steel, more than all known species of rodents. Screws are another supercategory: Add up the combinations based on screw materials (aluminum to titanium), screw types (from cap to drywall, from machine to sheet metal), screw heads (from washer-faced to countersunk), screw drives (from slot to hex, from Phillips to Robertson), screw shanks and tips (from die point to cone point), and screw dimensions (in metric and other units), and you end up with many millions of possible screw “species.”
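The screw arithmetic is a straightforward product of the per-category counts. A quick sketch, using made-up placeholder counts rather than figures from any standards catalog, shows how modest choices in each category multiply into the millions:

```python
from math import prod

# Placeholder counts per category (illustrative, not from a standards body).
categories = {
    "material": 10,    # aluminum to titanium
    "type": 12,        # cap, drywall, machine, sheet metal, ...
    "head": 8,         # washer-faced to countersunk
    "drive": 10,       # slot, hex, Phillips, Robertson, ...
    "shank_tip": 6,    # die point to cone point
    "dimensions": 40,  # metric and other units
}
species = prod(categories.values())  # 10*12*8*10*6*40 = 2,304,000 "species"
```

Even these conservative counts give over two million combinations; real catalogs, with finer size gradations, push the total far higher.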

Taking a different tack, we have also surpassed nature in the range of mass. The smallest land mammal, the Etruscan shrew, weighs as little as 1.3 grams, whereas the largest, the African elephant, averages about 5 metric tons. That’s a range of six orders of magnitude. Mass-produced mobile-phone vibrator motors match the shrew’s weight, while the largest centrifugal compressors driven by electric motors weigh around 50 metric tons, for a range of seven orders of magnitude.

The smallest bird, the bee hummingbird, weighs about 2 grams, whereas the largest flying bird, the Andean condor, can reach 15 kilograms, for a range of nearly four orders of magnitude. Today’s miniature drones weigh as little as 5 grams versus a fully loaded Airbus A380, which weighs 570 metric tons—a difference of eight orders of magnitude.
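The ranges quoted above are just base-10 logarithms of mass ratios, which a quick check with the article’s own numbers confirms:

```python
from math import log10

# Mass ratios from the text, expressed as orders of magnitude (log10),
# with all masses converted to grams first.
shrew_to_elephant = log10(5_000_000 / 1.3)     # 1.3-g shrew vs. 5-t elephant: ~6.6
motor_to_compressor = log10(50_000_000 / 1.3)  # vibrator motor vs. 50-t compressor: ~7.6
hummingbird_to_condor = log10(15_000 / 2)      # 2-g hummingbird vs. 15-kg condor: ~3.9
drone_to_a380 = log10(570_000_000 / 5)         # 5-g drone vs. 570-t A380: ~8.1
```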

And our designs have a key functional advantage: They can work and survive pretty much on their own, unlike our bodies (and those of all animals), which depend on a well-functioning microbiome: There are at least as many bacterial cells in our guts as there are cells in our organs. That’s life for you.

This article appears in the August 2019 print issue as “Animals vs. Artifacts: Which are more diverse?”

July 26th, 2019

Choosing the right current sensor does not need to be guesswork

Description: Download this application note to learn the key parameters for selecting current sensors, as well as the limitations of alternative technologies such as current-sense resistors.

July 25th, 2019

Scientists in Switzerland have demonstrated a technology that can produce kerosene and methanol from solar energy and air

Scientists have searched for a sustainable aviation fuel for decades. Now, with emissions from air traffic increasing faster than carbon-offset technologies can mitigate them, environmentalists worry that even with new fuel-efficient technologies and operations, emissions from the aviation sector could double by 2050.

But what if, by 2050, all fossil-derived jet fuel could be replaced by a carbon-neutral one made from sunlight and air?

In June, researchers at the Swiss Federal Institute of Technology (ETH) in Zurich demonstrated a new technology that creates liquid hydrocarbon fuels from thin air—literally. A solar mini-refinery—in this case, installed on the roof of ETH’s Machine Laboratory—concentrates sunlight to create a high-temperature (1,500 degrees C) environment inside the solar thermochemical reactor.

July 25th, 2019

Over time, we will design physical spaces to accommodate robots and augmented reality

Every time I’m in a car in Europe and bumping along a narrow, cobblestone street, I am reminded that our physical buildings and infrastructure don’t always keep up with our technology. Whether we’re talking about cobblestone roads or the lack of Ethernet cables in the walls of old buildings, much of our established architecture stays the same while technology moves forward.

But embracing augmented reality, autonomous vehicles, and robots gives us new incentives to redevelop our physical environments. To really get the best experience from these technologies, we’ll have to create what Carla Diana, an established industrial designer and author, calls the “robot-readable world.”

Diana works with several businesses that make connected devices and robots. One such company is Diligent Robotics, of Austin, Texas, which is building Moxi, a one-handed robot designed for hospitals. Moxi will help nurses and orderlies by taking on routine tasks, such as fetching supplies and lab results, that don’t require patient interaction. However, many hospitals weren’t designed with rolling robots with pincers for hands in mind.

Moxi can’t open every kind of door or use the stairs, so its usefulness is limited in the average hospital. For now, Diligent sends a human helper for Moxi during test runs. But the company’s thinking is that if hospitals see the value in an assistive robot, they might change their door handles and organize supplies around ramps, not stairs. The bonus is that these changes would make hospitals more accessible to the elderly and those with disabilities.

This design philosophy doesn’t have to be limited to the hospital, however. Autonomous cars will likely need road signs that are different from the ones we’ve grown accustomed to. Current road signs are easily read by humans, but they could be vandalized so as to trick autonomous vehicles into interpreting them incorrectly. Delivery drones will need markers to navigate as well as places to land, if Amazon wants to get serious about delivering packages this way.

Google has already developed one solution. Back in 2014, the company invented plus codes. These are short codes for places that don’t traditionally have street names and numbers, such as a residence in a São Paulo favela or a point along an oil pipeline. These codes are readable by humans and machines, thus making the world a little more bot friendly.
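Plus codes follow an open spec (Open Location Code), and the encoding is simple enough to sketch. The version below is a simplified illustration, not Google’s reference implementation; it handles only the standard 10-digit code and skips the spec’s edge-case clamping:

```python
# Open Location Code digits use a 20-symbol alphabet chosen to avoid
# look-alike characters (no 0/O or 1/I) and accidental words (no vowels).
ALPHABET = "23456789CFGHJMPQRVWX"

def encode(lat: float, lng: float) -> str:
    # Shift both axes positive, then scale so the finest digit (0.000125
    # degrees) becomes an integer; integer math sidesteps floating-point drift.
    lat_i = round((lat + 90.0) * 8000)
    lng_i = round((lng + 180.0) * 8000)
    code = ""
    for i in range(4, -1, -1):                   # five digit pairs, coarse to fine
        place = 20 ** i
        code += ALPHABET[(lat_i // place) % 20]  # latitude digit
        code += ALPHABET[(lng_i // place) % 20]  # longitude digit
        if len(code) == 8:
            code += "+"                          # separator after the area code
    return code
```

Each successive digit pair narrows the cell by a factor of 20 per axis, so a full 10-digit code pins a location to roughly a 14-by-14-meter cell; Google’s Mountain View campus, for example, encodes to 849VCWC8+R9.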

Augmented reality (AR) also stands to benefit from this new design philosophy. Mark Rolston is the founder and chief creative officer of ArgoDesign, a company that helps tech companies design their products. Rolston has found that bringing AR—such as Magic Leap’s head-mounted virtual retinal display—into offices and homes can be tough, depending on the environment. For example, the Magic Leap reads glass walls as blank space, which results in AR images that are too faint to show up on the surface.

AR also struggles with white or dark walls. Rolston says the ideal wall is painted a light gray and has curved edges rather than sharp corners. While he doesn’t expect every room in an office or home to follow these guidelines, he does think we’ll start seeing a shift in design to accommodate AR needs.

In other words, we’ll still see the occasional cobblestone street and white wall, but more and more we’ll see our physical structures accommodate our tech-focused society.

July 24th, 2019

Toyota’s T-TR1 offers a way for people to attend the Olympics without leaving home

With the Olympics taking place next year in Japan, Toyota is (among other things) stepping up its robotics game to help provide “mobility for all.” We know that Toyota’s HSR will be doing work there, along with a few other mobile systems, but the Toyota Research Institute (TRI) has just announced a new telepresence robot called the T-TR1, featuring an absolutely massive screen designed to give you a near-lifesize virtual presence.

July 24th, 2019

The end of Moore’s Law will drive a renaissance in chip innovation, CEOs say. But the semiconductor industry must face the existential question of power consumption driving climate change

We have entered a “Renaissance of Silicon.” That was the thesis of a panel that brought together semiconductor industry CEOs at Micron Technology’s San Jose campus last week. This renaissance, the executives indicated, will lead to exciting—but not predictable—innovation in chip technology, driven by applications that demand more computing power and by the demise of Moore’s Law.

“I’ve never seen a more exciting time in my 40 years in the industry,” said Sanjay Mehrotra, CEO of Micron Technology.

“I hadn’t heard semiconductor and Renaissance in the same sentence in 20 years,” said Tammy Kiely, Goldman Sachs global head of semiconductor investment banking. Kiely moderated the panel, which was organized by the Churchill Club.

The driving force behind this renaissance is “burning necessity,” said Xilinx CEO Victor Peng. Arm CEO Simon Segars agreed.

“For the last 15 years, the driver of growth was mobile,” Segars said. “Over the last five years, the industry was in a bit of a lull. Then all of a sudden there is this combination of effects.” He listed cloud computing, handheld computing, IoT devices, 5G, AI, and autonomous vehicles as contributing to the boom. “Lots of things are coming together at once,” he said, along with “fundamental algorithm development.”

All these things, Xilinx’s Peng said, mean that the industry will have to come up with a way to improve computing power and storage by a factor of 100—if not 1000—over the next 10 years. That will require new architectures, new packaging—and a new way of looking at the entire ecosystem. “The entire data center is a computer,” he said, pointing out that computation will have to happen all over it, in memory, in switches, even in the communications lines.

Getting a 100- to 1000-times improvement in processing power will also require innovation in software, Peng continued. “People got used to Moore’s law enabling them to throw cycles away to enable abstraction. It’s not that simple anymore…. When you need 100 times [improvement], you don’t evolve the same architecture, you start all over. When Moore’s law was chipping away every year, you didn’t rethink the entire problem. But now you have to look at hardware and software together.”

Concerned About Climate Change

The panelists reminded the audience that it’s also no longer just about making chips better, faster, and cheaper (or these days, as Peng points out, getting one or two of those things at best). The semiconductor industry also has to drive power consumption down.

“Power consumption is an existential question, [considering] climate change,” Peng said, noting that data centers now consume about 10 percent of the world’s electric power. “That cannot scale exponentially and be sustainable.” Getting power consumption down is, he said, “not only a huge business opportunity but a moral imperative.”

Climate change, Segars said, will be a big driver for semiconductor innovation over the next five years. The industry, he said, will have to “create different computing engines to solve things in a more efficient way… [and innovate] on microarchitectures to get power down. In the long term, [we have to] think about workloads. The ultimate architecture might be dedicated engines for different commonly executed tasks that do things in a more efficient way.”

Segars also suggested that we ought to consider the bigger picture when weighing the power costs of computing. “Smart cities,” he said, “may result in more energy getting burned in data centers to process the IoT data, but their net could be energy savings.”

Don’t Expect a Startup Surge

This boom in innovation is unlikely to lead to a boom in startups, the panelists agreed. That may be counter to what we expect from Silicon Valley, but it’s the reality, they indicated.

“The cost of taping out a chip at 5 nanometers is astronomical, so unless you can amortize costs over multiple designs, nobody can afford it,” Segars said. “So we will need the large semiconductor companies to keep progress aggressive. Of course, the more expensive it is, the fewer that can afford to do it. But unlike other industries, like steel, I don’t think innovation is going to dry up.”

“Larger companies have a greater ability to invest in future innovations, to make big bets,” Mehrotra agreed. However, he said, “there are startups that are forming that are working on silicon aspects. Certainly what Simon [Segars] said about increased complexity and the time and money [involved] compared to the past is true.” But, he said, at least some architecture innovation is happening outside the big companies—though “not the way it was when I joined the industry 40 years ago.”

“There has been a lot of VC money going into AI chip companies,” Segars concurred. But, he predicts that, “unfortunately I don’t think we are going back to the days where Sand Hill Road is going to hand out wheelbarrows of money to people to design chips.”

July 23rd, 2019

The silicon-controlled rectifier, or thyristor, can be found in flash bulbs, motors and manufacturing equipment

THE INSTITUTE

More than 60 years after General Electric introduced the silicon-controlled rectifier, it is still a dominant control device in the power industry because of its efficiency. The SCR, also known as the thyristor, is a three-terminal p-n-p-n device with an anode, a cathode, and a gate. It was developed at a GE facility in Clyde, N.Y., and introduced in 1957.

The invention of the SCR led to improvements in the control of the rectification, or conversion, of line voltage from AC to DC and became the basis of modern speed control in both AC and DC motors. The device’s application to motor control made possible the displacement of DC motors by the more efficient and reliable AC motors, particularly in trains, according to the Engineering and Technology History Wiki. SCRs also allowed for DC electrical transmission at much higher voltages and power levels than previously obtainable, says IEEE Life Senior Member Sreeram Dhurjaty, chair of the IEEE Power Electronics Society’s Rochester Section chapter.

The SCR was dedicated as an IEEE Milestone on 14 June. Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world. The Rochester Section’s Power Electronics Society chapter was the sponsor.


Prior to 1955, triode vacuum tubes, which control the flow of electric current between electrodes, were used for machine control. They were difficult to operate and frequently failed in large machines.

To fix those issues, IEEE Member John Bardeen and Walter H. Brattain of Bell Laboratories developed the point-contact transistor in 1947, the first working transistor, built from two closely spaced metal contacts on a germanium base.

According to IEEE Life Fellow Edward Owen, who wrote the article “SCR Is 50 Years Old” about the technology in the IEEE Industry Applications Magazine, the complexity of the point-contact transistor’s circuits and the fragile nature of the technology spurred GE power engineers Frank Gutzwiller and Gordon Hall to develop a new technology in 1956 that would improve upon Bardeen and Brattain’s device. But, as IEEE Life Fellow Gerard Hurley, history chair of the IEEE Power Electronics Society, explained during the Milestone ceremony, the two engineers encountered several issues.

Hurley said Gutzwiller and Hall did not realize until later in their research that silicon, not germanium, was the appropriate semiconductor material to use for the SCR. Germanium has a smaller band gap—which means less energy is required to pull electrons into conduction. That makes it easier for the material to heat up and degrade.
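A back-of-the-envelope estimate shows how strong the band-gap effect is. Intrinsic carrier density in a semiconductor scales roughly as exp(-Eg/2kT); ignoring the material-dependent prefactors (a deliberate simplification), the exponential alone separates germanium and silicon by orders of magnitude:

```python
from math import exp

kT = 0.02585               # thermal energy in eV at roughly 300 K
Eg_Ge, Eg_Si = 0.66, 1.12  # band gaps in eV

# Ratio of the exponential factors: how much more readily germanium
# promotes electrons into conduction (and thus leaks) than silicon
# at the same temperature.
ratio = exp((Eg_Si - Eg_Ge) / (2 * kT))  # on the order of thousands
```

That factor of a few thousand in thermally generated carriers is why a germanium device degrades and runs away thermally long before a comparable silicon one does.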

Gutzwiller and Hall also encountered problems with false triggering. Heat alone could cause the device to turn on. The device also could be triggered by induced current, when the anode to cathode voltages rose too fast. Both instances could cause leakage, which could increase power consumption or result in complete circuit failure.

The first SCRs that Gutzwiller and Hall built could tolerate only low voltages, but refinements to the manufacturing process ultimately produced devices capable of handling voltages exceeding 10 kilovolts. Gutzwiller and Hall also designed a silicon-wafer bonding process capable of better accommodating thermally induced stresses.

Modern SCRs are used for AC power control for lights and motors, AC power switching circuits, and photographic flashes.

The SCR also made an impact on manufacturing, according to IEEE Life Fellow John Kassakian, founding president of the IEEE Power Electronics Society.

“The steel, electrochemical, automotive, and welding industries, among many others, benefited greatly by the improved efficiency, more precise control, and reduced cost made possible by the application of SCR-based equipment to their processes,” Kassakian said at the Milestone ceremony.

A plaque honoring the SCR was mounted at the entrance of the Advanced Atomization Technologies headquarters, in Clyde. AAT is a joint venture between GE Aviation and Parker Aerospace.

The plaque reads:

General Electric introduced the silicon-controlled rectifier (SCR), a three-terminal p-n-p-n device, in 1957. The gas-filled tubes used previously were difficult to operate and unreliable. The symmetrical alternating-current switch (TRIAC), the gate turn-off thyristor (GTO), and the large integrated gate-commutated thyristor (IGCT) evolved from the SCR. Its development revolutionized efficient control of electric energy and electrical machines.

This article was written with assistance from the IEEE History Center, which is funded by donations to the IEEE Foundation’s Realize the Full Potential of IEEE campaign.

July 23rd, 2019

This drone can dynamically fold and unfold its arms to pass through narrow gaps

Late last year, we wrote about a foldable drone from Davide Scaramuzza’s lab at the University of Zurich that could change its shape in mid-air to squeeze through narrow gaps. That drone used servos to achieve a variety of different configurations, which made it very flexible but also imposed a penalty in complexity and weight. At ICRA in Montreal earlier this year, researchers from UC Berkeley demonstrated a new design for a foldable drone, able to shrink itself by 50 percent in less than half a second thanks to spring-loaded arms controlled by the power of the drone’s own propellers.