Engineering Community Portal

MERLOT Engineering
Share

Welcome  – From the Editor

Welcome to the Engineering Portal on MERLOT. Here, you will find lots of resources on a wide variety of topics ranging from aerospace engineering to petroleum engineering to help you with your teaching and research.

As you scroll this page, you will find many Engineering resources, including the most recently added Engineering materials and members, journals and publications, and Engineering education alerts and Twitter feeds.

Showcase

Over 150 embeddable or downloadable 3D simulations in the subject areas of Automation, Electro/Mechanical, Process Control, and Renewable Energy. These short 3-7 minute simulations cover a range of engineering topics and help students understand conceptual engineering material. 

Each video is hosted on Vimeo and can be played, embedded, or downloaded for use in the classroom or online.  Another option is an embeddable HTML player, created in Storyline, with review questions for each simulation that reinforce the concepts learned. 

These simulations were made possible by a Department of Labor grant.  Extensive storyboarding and scripting with instructors and industry experts ensures the content is accurate and up to date. 

Engineering Technology 3D Simulations in MERLOT

New Materials

New Members

Engineering on the Web

  • DEVA and GP's 'alliance' engineering in Turkish politics | Column - Daily Sabah
    Jan 16, 2022 01:07 PM PST
  • Google Maps and the Technology Behind the Popular Navigation App - Interesting Engineering
    Jan 16, 2022 12:41 PM PST
  • Astronomers Create Largest Ever 3D Map of the Cosmos - Interesting Engineering
    Jan 16, 2022 12:14 PM PST
  • Minneapolis' Niron Magnetics still small, but intends to be big player in greener economy
    Jan 16, 2022 12:09 PM PST
  • Boys and Girls Clubs of Carson gets first look at Coliseum track | NASCAR
    Jan 16, 2022 11:03 AM PST
  • Marin high schools align with college for career paths
    Jan 16, 2022 10:48 AM PST
  • Caste re-engineering decides UP poll result - Daily Pioneer
    Jan 16, 2022 10:33 AM PST
  • The core stage engineering testing for the Artemis I SLS rocket has been completed. - Brinkwire
    Jan 16, 2022 09:12 AM PST
  • Resilience is what makes microgrids attractive as back-up energy controls | Building Design ...
    Jan 16, 2022 08:47 AM PST
  • New name and outlook for Sarnia engineering firm | St. Thomas Times-Journal
    Jan 16, 2022 08:33 AM PST
  • James Randolph "Randy" Lane Obituary - Poughkeepsie Journal
    Jan 16, 2022 08:27 AM PST
  • SDDOT Secretary Joel Jundt named to National Transportation Research Board | DRGNews
    Jan 16, 2022 08:26 AM PST
  • From PhD To Engineering: These 9 Indian Cricketers Are Highly Educated You Probably Didn't Know
    Jan 16, 2022 08:15 AM PST
  • The Biggest Danger of AI Isn't Skynet — It's Human Bias That Should Scare You
    Jan 16, 2022 08:11 AM PST
  • 69% M.Tech seats lie vacant as interest declines - Hindustan Times
    Jan 16, 2022 08:11 AM PST
  • Scholarship created in memory of UD student killed at Astroworld - Springfield News-Sun
    Jan 16, 2022 08:01 AM PST
  • CCFR Promotes Community Risk Reduction Week 2022 | Manning Live
    Jan 16, 2022 07:40 AM PST
  • Minister provides scholarships for 53 engineering, eight medical students | News Ghana
    Jan 16, 2022 06:36 AM PST
  • Robert Cattoi Obituary (2022) - Cedar Rapids, IA - The Gazette - Legacy.com
    Jan 16, 2022 05:25 AM PST
  • Physics-Based Engineering and the Machine-Learning “Black Box” Problem - California News Times
    Jan 16, 2022 05:22 AM PST
  • The Lies that Powered the Invention of Pong
    Jan 15, 2022 08:00 AM PST
In 1971 video games were played in computer science laboratories when the professors were not looking—and in very few other places. In 1973 millions of people in the United States and millions of others around the world had seen at least one video game in action. That game was Pong. Two electrical engineers were responsible for putting this game in the hands of the public—Nolan Bushnell and Allan Alcorn, both of whom, with Ted Dabney, started Atari Inc. in Sunnyvale, Calif. Mr. Bushnell told Mr. Alcorn that Atari had a contract from General Electric Co. to design a consumer product. Mr. Bushnell suggested a Ping-Pong game with a ball, two paddles, and a score, that could be played on a television. “There was no big contract,” Mr. Alcorn said recently. “Nolan just wanted to motivate me to do a good job. It was really a design exercise; he was giving me the simplest game he could think of to get me to play with the technology.” The key piece of technology he had to toy with, he explained, was a motion circuit designed by Mr. Bushnell a year earlier as an employee of Nutting Associates. Mr. Bushnell first used the circuit in an arcade game called Computer Space, which he produced after forming Atari. It sold 2000 units but was never a hit. This article was first published as "Pong: an exercise that started an industry." It appeared in the December 1982 issue of IEEE Spectrum as part of a special report, “Video games: The electronic big bang.” A PDF version is available on IEEE Xplore. In the 1960s Mr. Bushnell had worked at an amusement park and had also played space games on a PDP-10 at college. 
He divided the cost of a computer by the amount of money an average arcade game made and promptly dropped the idea, because the economics did not make sense. Then in 1971 he saw a Data General computer advertised for $5000 and determined that a computer game played on six terminals hooked up to that computer could be profitable. He began designing a space game to run on such a timeshared system, but because game action occurs in real time, the computer was too slow. Mr. Bushnell began trying to take the load off the central computer by making the terminals smarter, adding a sync generator in each, then circuits to display a star field, until the computer did nothing but keep track of where the player was. Then, Mr. Bushnell said, he realized he did not need the central computer at all—the terminals could stand alone. “He actually had the order for the computers completed, but his wife forgot to mail it,” Mr. Alcorn said, adding, “We would have been bankrupt if she had.” Mr. Bushnell said, “The economics were no longer a $6000 computer plus all the hardware in the monitors; they became a $400 computer hooked up to a $100 monitor and put in a $100 cabinet. The ice water thawed in my veins.” Computer Space appealed only to sophisticated game players—those who were familiar with space games on mainframe computers, or those who frequent the arcades today. It was well before its time. Pong, on the other hand, was too simple for an EE like Mr. Bushnell to consider designing it as a real game—and that is why it was a success. Mr. Bushnell had developed the motion circuit in his attempt to make the Computer Space terminals smarter, but Mr. Alcorn could not read his schematics and had to redesign it. Mr. 
Alcorn was trying to get the price down into the range of an average consumer product, which took a lot of ingenuity and some tradeoffs. “There was no real bulk memory available in 1972,” he said. “We were faced with having a ball move into any of the spots in a 200-by-200 array without being able to store a move. We did it with about 10 off-the-shelf TTL parts by making sync generators that were set one or two lines per frame off register.” Thus, the ball would move in relation to the screen, both vertically and horizontally, just as a misadjusted television picture may roll. Mr. Alcorn recalled that he originally used a chip from Fairchild to generate the display for the score, but it cost $5, and he could do the same thing for $3 using TTL parts, though the score was cruder. The ball in Pong is square—another tradeoff. Considering the amount of circuitry a round ball would require, Mr. Alcorn asked, “who is going to pay an extra quarter for a round ball?” Sound was also a point of contention at Atari. Mr. Bushnell wanted the roar of approval of a crowd of thousands; Mr. Dabney wanted the crowd booing. “How do you do that with digital stuff?” Mr. Alcorn asked. “I told them I didn’t have enough parts to do that, so I just poked around inside the vertical sync generator for the appropriate tones and made the cheapest sound possible.” The hardware design of Pong took three months, and Mr. Alcorn’s finished prototype had 73 ICs, which, at 50 cents a chip, added up to $30 to $40 worth of parts. “That’s a long way from a consumer product, not including the package, and I was depressed, but Nolan said ‘Yeah, well, not bad.’” They set the Pong 2 prototype up in a bar and got a call the next day to take it out because it was not working. When they arrived, the problem was obvious: the coin box was jammed full of quarters.
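The off-register sync-generator trick Mr. Alcorn describes can be sketched in a few lines of code. This is a hypothetical model, not the actual Pong schematic: the ball is treated as a second sync signal whose period differs from the master sync by a line or two per frame, so its position is just the accumulated phase difference between the two generators, with no stored coordinates at all.

```python
# Hypothetical sketch of the Pong motion trick: the ball's position is the
# accumulated offset between a "ball" sync generator and the master sync.
# The 200x200 array size comes from the article; the offsets are illustrative.

LINES_PER_FRAME = 200   # vertical positions in the 200-by-200 array
COLS_PER_LINE = 200     # horizontal positions

def ball_position(frame, v_offset=1, h_offset=2):
    """Ball (row, col) after `frame` frames, given per-frame sync offsets.

    Nothing is stored between frames: the position is pure phase drift,
    like a misadjusted TV picture rolling vertically and horizontally.
    """
    row = (frame * v_offset) % LINES_PER_FRAME
    col = (frame * h_offset) % COLS_PER_LINE
    return row, col

for f in (0, 1, 2, 100):
    print(f, ball_position(f))
```

Reversing the ball on a paddle hit would then amount to flipping the sign of an offset, which in hardware is just switching which generator runs fast or slow.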
  • Video Friday: Guitar Bot
    Jan 14, 2022 09:00 AM PST
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!): ICRA 2022: 23–27 May 2022, Philadelphia ERF 2022: 28–30 June 2022, Rotterdam, Netherlands CLAWAR 2022: 12–14 September 2022, Açores, Portugal Let us know if you have suggestions for next week, and enjoy today's videos. Robotics. It's a wicked game. [ GA Tech ] This experiment demonstrated the latest progress of the flying humanoid robot Jet-HR2. The new control strategy allows the robot to hover with position feedback from the motion-capture system. Video demonstrates the robot's ability to remain stable hovering in midair for more than 20 seconds. [ YouTube ] Thanks, Zhifeng! This super cool soft robotic finger from TU Berlin is able to read Braille with astonishing accuracy by using sound as a sensor. [ TU Berlin ] Cassie Blue navigates around furniture used as obstacles in the Ford Robotics Building at the University of Michigan. All the clips in this video are magnified 1x on purpose to show Cassie's motion. [ Michigan Robotics ] Thanks, Bruce! Tapomayukh Bhattacharjee received a National Science Foundation (NSF) National Robotics Initiative (NRI) collaborative grant for a project that aims to address—and ameliorate—the way people with mobility issues are given a chance for improved control and independence over their environments, especially in how they are fed—or better, how they can feed themselves with robotic assistance. [ Cornell ] A novel quadcopter capable of changing shape midflight is presented, allowing for operation in four configurations with the capability of sustained hover in three. [ HiPeR Lab ] Two EPFL research groups teamed up to develop a machine-learning program that can be connected to a human brain and used to command a robot. 
The program adjusts the robot’s movements based on electrical signals from the brain. The hope is that with this invention, tetraplegic patients will be able to carry out more day-to-day activities on their own. [ EPFL ] The MRV is SpaceLogistics’ next-generation on-orbit servicing vehicle, incorporating a robotic arm payload developed and integrated by the U.S. Naval Research Laboratory and provided by the U.S. Defense Advanced Research Projects Agency. In this test of Flight Robotic Arm System 1, the robotic arm is executing an exercise called the Gauntlet, which moves the arm through a series of poses that exercise the full motion of all seven degrees of freedom. [ Northrop Grumman ] You almost certainly can't afford it, but the Shadow Robot Co. would like to remind you that the Shadow Hand is for sale. [ Shadow ] Join ESA astronaut Matthias Maurer inside Kibo, the Japanese laboratory module of the International Space Station in 360°, setting up Astrobee free-flying robots for the ReSWARM (RElative Satellite sWArming and Robotic Maneuvering) experiment. This robotics demonstration tests autonomous microgravity motion planning and control for on-orbit assembly and coordinated motion. [ NASA ] Boeing's MQ-25 autonomous aerial tanker continues its U.S. Navy carrier testing. [ Boeing ] Sphero Sports is built for sports foundations, schools, and CSR-driven organizations to teach STEM subjects. Sphero Sports gets students excited about STEM education and proactively supports educators and soccer foundation staff to become comfortable in learning and teaching these critical skills. [ Sphero ] Adibot-A is Ubtech Robotics' fully loaded autonomous disinfection solution, which can be programmed and mapped to independently navigate one or multiple floor plans. [ UBTECH ] Survice Engineering Co. was proud to support the successful completion of the Unmanned Logistics System–Air (ULS-A) Joint Capability Technology Demonstration (JCTD) program as the lead system integrator. 
We worked with the U.S. government, leaders in autonomous unmanned systems, and our warfighters to develop, test, and evaluate the latest multirotor VTOL platforms and technologies for assured logistics resupply at the forward edge of the battlefield. [ SURVICE ] via [ Malloy Aeronautics ] Thanks, Chris! Yaqing Wang from JHU's Terradynamics Lab gives a talk on trying to make a robot that is anywhere near as talented as a cockroach. [ Terradynamics Lab ] In episode one of season two of the Robot Brains podcast, host Pieter Abbeel is joined by guest (and close collaborator) Sergey Levine, professor at UC Berkeley, EECS. Sergey discusses the early years of his career, how Andrew Ng influenced his interest in machine learning, his current projects, and his lab's recent accomplishments. [ The Robot Brains ] Thanks, Alice!
  • Learn About the Candidates Running for 2023 President-Elect
    Jan 13, 2022 11:00 AM PST
The IEEE Board of Directors has nominated Life Fellow Thomas Coughlin and Senior Members Kathleen Kramer and Maike Luiken as candidates for IEEE president-elect. IEEE Life Fellow Kazuhiro Kosuge is seeking to be a petition candidate. Other members who want to become a petition candidate still may do so by submitting their intention to elections@ieee.org by 8 April. The winner of this year’s election will serve as IEEE president in 2024. Life Fellow Thomas Coughlin Nominated by the IEEE Board of Directors Coughlin is founder and president of Coughlin Associates, in San Jose, Calif., which provides market and technology analysis as well as data storage, memory technology, and business consulting services. He has more than 40 years of experience in the data storage industry and has been a consultant for more than 20 years. He has been granted six patents. Before starting his own company, Coughlin held senior leadership positions in Ampex, Micropolis, and SyQuest. He is the author of Digital Storage in Consumer Electronics: The Essential Guide, which is in its second edition. He is a regular contributor on digital storage for the Forbes blog and other news outlets. In 2019 he was IEEE-USA president as well as IEEE Region 6 director. He also was chair of the IEEE New Initiatives and Public Visibility committees. He was vice president of operations and planning for the IEEE Consumer Technology Society and served as general chair of the 2011 Sections Congress in San Francisco. He is an active member of the IEEE Santa Clara Valley Section, which he chaired, and has been involved with several societies, standards groups, and the IEEE Future Directions committee. 
As a distinguished lecturer for the Consumer Technology Society and IEEE Student Activities, he has spoken on digital storage in consumer electronics, digital storage and memory for artificial intelligence, and how students can make IEEE their “professional home.” Coughlin is a member of the IEEE–Eta Kappa Nu (IEEE-HKN) honor society. He has received several recognitions including the 2020 IEEE Member and Geographic Activities Leadership Award. Coughlin is active in several other professional organizations including the Society of Motion Picture and Television Engineers and the Storage Networking Industry Association. Senior Member Kathleen Kramer Nominated by the IEEE Board of Directors Kramer is a professor of electrical engineering at the University of San Diego, where she served as chair of the EE department and director of engineering from 2004 to 2013. As director she provided academic leadership for all of the university’s engineering programs. Her areas of interest include multisensor data fusion, intelligent systems, and cybersecurity in aerospace systems. She has authored or co-authored more than 100 publications. Kramer has worked for several companies including Bell Communications Research, Hewlett-Packard, and Viasat. She served as the 2017–2018 director of IEEE Region 6 and was the 2019–2021 IEEE secretary. In that position, she chaired the IEEE Governance Committee and helped make major changes including centralizing ethics conduct reporting, strengthening processes to handle ethics and member conduct, and improving the process used to periodically review each of the individual committees and major boards of the IEEE. She has held several leadership positions in the IEEE San Diego Section, including chair, secretary, and treasurer. Her first position with the section was advisor to the IEEE University of San Diego Student Branch. Kramer is an active leader within the IEEE Aerospace and Electronic Systems Society. 
She currently heads its technical operations panel on cybersecurity. From 2016 to 2018 she served as vice president of education. She is a distinguished lecturer for the society and has given talks on signal processing, multisensor data fusion, and neural systems. Kramer serves as an IEEE commissioner within ABET, the global accrediting organization for academic programs in applied science, computing, engineering, and technology. She has contributed to several advances for graduate programs, cybersecurity, mechatronics, and robotics. Life Fellow Kazuhiro Kosuge Seeking petition candidacy Kosuge is a professor of robotic systems at the University of Hong Kong’s electrical and electronic engineering department. He has been conducting robotics research for more than 35 years, has published more than 390 technical papers, and has been granted more than 70 patents. He began his engineering career as a research staff member in the production engineering department of Japanese automotive manufacturer Denso. After two years, he joined the Tokyo Institute of Technology’s department of control engineering as a research associate. In 1989 and 1990, he was a visiting research scientist at MIT. After he returned to Japan, he began his academic career at Nagoya University as an associate professor. In 1995 Kosuge left Nagoya and joined Tohoku University, in Sendai, Japan, as a faculty member in the machine intelligence and system engineering department. He is currently director of the university’s Transformative AI and Robotics International Research Center. An IEEE-HKN member, he has held several IEEE leadership positions including 2020 vice president of Technical Activities, 2015–2016 Division X director, and 2010–2011 president of the Robotics and Automation Society. 
He has served in several advisory roles for Japan, including science advisor to the Ministry of Education, Culture, Sports, Science, and Technology’s Research Promotion Bureau from 2010 to 2014. He was a senior program officer of the Japan Society for the Promotion of Science from 2007 to 2010. In 2005 he was appointed as a Fellow of the Japan Science and Technology Agency’s Center for Research and Development Strategy. Among his honors and awards is the Medal of Honor with purple ribbon, awarded in 2018 by the emperor of Japan. Senior Member Maike Luiken Nominated by the IEEE Board of Directors Luiken’s career in academia spans 30 years, and she has more than 20 years of experience in industry. She is co-owner of Carbovate Development, in Sarnia, Ont., Canada, and is managing director of its R&D department. She also is an adjunct research professor at Western University in London, also in Ontario. Her areas of interest include power and energy, information and communications technology, how progress in one field enables advances in other disciplines and sectors, and how the deployment of technologies contributes—or doesn’t contribute—to sustainable development. In 2001 she joined the National Capital Institute of Telecommunications in Ottawa as vice president of research alliances. There she was responsible for a wide area test network and its upgrades. While at the company, she founded two research alliance networks that spanned across industry, business, government, and academia in the areas of wireless and photonics. She joined Lambton College, in Sarnia, in 2005 and served as dean of its technology school as well as of applied research and innovation. She led the expansion of applied research conducted at the school and helped Lambton become one of the top three research colleges in Canada. 
In 2013 she founded the Bluewater Technology Access Centre (now the Lambton Manufacturing Innovation Centre). It provides applied research services to industry while offering students and faculty opportunities to develop solutions for industry problems. Luiken, an IEEE-HKN member, was last year’s vice president of IEEE Member and Geographic Activities. She was president of IEEE Canada in 2018 and 2019, when she also served as Region 7 director. She has served on numerous IEEE boards and committees including the IEEE Board of Directors, the Canadian Foundation, Member and Geographic Activities, and the Internet Initiative.
  • Unitree’s AlienGo Quadruped Can Now Wield a Lightsaber
    Jan 13, 2022 08:19 AM PST
    Unitree Robotics, well known for providing affordable legged robots along with questionable Star Wars–themed promotional videos, has announced a brand-new, custom-made, 6-degree-of-freedom robotic arm intended to be mounted on the back of its larger quadrupeds. Also, it will save humanity from Sith from Mars, or something. This, we should point out, is not the first time Unitree has used the Force in a promotional video, although its first attempt was very Dark Side and the second attempt seemed to be mostly an apology for the first. The most recent video here seems to have landed squarely on the Light Side, which is good, but I’m kinda confused about the suggestion that the baddies come from Mars (?) and most humans are killed (??) and the answer is some sort of “Super AI” (???). I guess Unitree will have to release more products so that we can learn how this story ends. Anyway, about the arm: There are two versions, the Z1 Air and the Z1 Pro, built with custom motors using harmonic reducers for low backlash and torque control. They are almost exactly the same, except that the Pro weighs 4.3 kilograms rather than 4.1 kg, and has a payload of 3–5 kg rather than 2 kg. Max reach is 0.7 meters, with 0.1 millimeter repeatability. The price for the Air version is “about $6600,” and it’s compatible with “other mobile robots” as well. It’s important to note that just having an arm on a robot is arguably the easy part—it’s using the arm that’s the hard part, in the sense that you have to program it to do what you need it to do. A strong, lightweight, and well-integrated arm certainly makes that job easier, but it remains to be seen what will be involved in getting the arm to do useful stuff. I don’t want to draw too many comparisons to Boston Dynamics here, but Spot’s arm comes with autonomous and semi-autonomous behaviors built-in, allowing otherwise complex actions to be leveraged by commercial end users. It’s not yet clear how Unitree is handling this. 
We’re at the point now with robots in general that in many cases, software is the differentiator rather than hardware, and you get what you pay for. That said, sometimes what you want or need is a more affordable system to work with, and remember that Unitree’s AlienGo costs under $10K. There’s certainly a demand for affordable hardware, and while it may not be ready to be dropped into commercial applications just yet, it’s good to see options like these on the market.
  • Physicists Spin Up Quantum Tornadoes
    Jan 13, 2022 06:00 AM PST
Shrink down to the level of atoms and you enter the quantum world, so supremely weird that even a physicist will sometimes gape. Hook that little world to our big, classical one, and a cat can be both alive and dead (sort of). “If you think you understand quantum mechanics, you don’t understand quantum mechanics,” said the great Richard Feynman, four decades ago. And he knew what he was talking about (sort of). Now comes a report on a quantum gas, called a Bose-Einstein condensate, which scientists at the Massachusetts Institute of Technology first stretched into a skinny rod, then rotated until it broke up. The result was a series of daughter vortices, each one a mini-me of the mother form. The research, published in Nature, was conducted by a team of scientists affiliated with the MIT-Harvard Center for Ultracold Atoms and MIT’s Research Laboratory of Electronics. The rotating quantum clouds, effectively quantum tornadoes, recall phenomena seen in the large-scale, classical world that we are familiar with. One example would be so-called Kelvin-Helmholtz clouds, which look like periodically repeating, serrated cartoon images of waves on the ocean. These wave-shaped clouds, seen over an apartment complex in Denver, exhibit what’s called Kelvin-Helmholtz instability. (Image: Rick Duffy/Wikipedia) The way to make quantum cloud vortices, though, involves more lab equipment and less atmospheric wind shear. “We start with a Bose-Einstein condensate, 1 million sodium atoms that share one and the same quantum-mechanical wave function,” says Martin Zwierlein, a professor of physics at MIT. The same mechanism that confines the gas—an atom trap, made up of laser beams—allows the researchers to squeeze it and then spin it like a propeller. “We know what direction we’re pushing, and we see the gas getting longer,” he says. 
“The same thing would happen to a drop of water if I were to spin it up in the same way—the drop would elongate while spinning.” What they actually see is effectively the shadow cast by the sodium atoms as they fluoresce when illuminated by laser light, a technique known as absorption imaging. Successive frames in a movie can be captured by a well-placed CCD camera. At a particular rotation rate, the gas breaks up into little clouds. “It develops these funny undulations—we call it flaky, then becomes even more extreme. We see how this gas ‘crystalizes’ in a chain of droplets—in the last image there are eight droplets.” Why settle for a one-dimensional crystal when you can go for two? And in fact the researchers say they have done just that, in as yet unpublished research. That a rotating quantum gas would break into blobs had been predicted by theory—that is, one could infer that this would happen from earlier theoretical work. “We in the lab didn’t expect this—I was not aware of the paper; we just found it,” Zwierlein says. “It took us a while to figure it out.” The crystalline form appears clearly in a magnified part of one of the images. Two connections, or bridges, can be seen in the quantum fluid, and instead of the single big hole you’d see in water, the quantum fluid has a whole train of quantized vortices. In a magnified part of the image, the MIT researchers found a number of these little holelike patterns, chained together in regularly repeating fashion. “It’s similar to what happens when clouds pass each other in the sky,” he says. “An originally homogeneous cloud starts forming successive fingers in the Kelvin-Helmholtz pattern.” Very pretty, you say, but surely there can be no practical application. Of course there can; the universe is quantum. The research at MIT is funded by DARPA—the Defense Advanced Research Projects Agency—which hopes to use a ring of quantum tornadoes as fabulously sensitive rotation sensors. 
Today if you’re a submarine lying under the sea, incommunicado, you might want to use a fiber optic gyroscope to detect slight rotational movement. Light travels both one way and the other in the fiber, and if the entire thing is spinning, you should get an interference pattern. But if you use atoms rather than light, you should be able to do the job better, because atoms are so much slower. Such a quantum-tornado sensor could also measure slight changes in the earth’s rotation, perhaps to see how the core of the earth might be affecting things. The MIT researchers have gone far down the rabbit hole, but not quite to the bottom of it. Those little daughter tornadoes can be confirmed as still being Bose-Einstein condensates because even the smallest ones still have about 10 atoms apiece. If you could get down to just one per vortex, you’d have the quantum Hall effect, which is a different state of matter. And with two atoms per vortex, you’d get a “fractional quantum Hall” fluid, with each atom “doing its own thing, not sharing a wave function,” Zwierlein says. The quantum Hall effect is now used to define the ratio of Planck’s constant divided by the charge of the electron squared (h/e²)—a number called the von Klitzing constant—which is about as basic as basic physics gets. But this effect is still not fully understood. Most studies have focused on the behavior of electrons, and the MIT researchers are trying to use sodium atoms as stand-ins, says Zwierlein. So although they’re not all the way to the bottom of the scale yet, there’s plenty of room for discovery on the way to the bottom. As Feynman also might have said (sort of).
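The fiber-gyro principle mentioned above can be put in rough numbers with the standard Sagnac phase-shift formula, Δφ = 8πNAΩ/(λc), for N fiber loops of area A spinning at rate Ω with light of wavelength λ. All parameter values below (loop count, coil radius, wavelength) are illustrative assumptions, not figures from the article.

```python
import math

C = 299_792_458.0  # speed of light, m/s


def sagnac_phase(n_loops, area_m2, omega_rad_s, wavelength_m):
    """Sagnac phase shift (radians) between counter-propagating beams
    in n_loops fiber loops of area area_m2, rotating at omega_rad_s."""
    return 8 * math.pi * n_loops * area_m2 * omega_rad_s / (wavelength_m * C)


# Earth's rotation: one turn per sidereal day (~86164 s)
omega_earth = 2 * math.pi / 86164.0

# Assumed gyro: 1000 loops of 10 cm radius, 1550 nm telecom light
phi = sagnac_phase(1000, math.pi * 0.1**2, omega_earth, 1550e-9)
print(phi)  # a small fraction of a radian
```

The tiny result for a full Earth's rotation is the point of the article's closing remark: slow atoms (with their much shorter de Broglie wavelength) or a ring of quantum vortices promise far larger phase shifts for the same rotation rate than light in fiber.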
  • Adhesives Gain Popularity for Wearable Devices
    Jan 12, 2022 07:00 AM PST
    This is a sponsored article brought to you by Master Bond. Master Bond adhesive formulations provide solutions for challenging assembly applications in manufacturing electronic wearable devices. Product formulations include epoxies, silicones, epoxy-polyurethane hybrids, cyanoacrylates, and UV curing compounds. There are some fundamental things to consider when deciding what is the right adhesive for the assembly of electronic wearable devices. Miniaturization of devices, and the need to meet critical performance specifications with multiple substrates, require an analysis of which chemical composition is most suitable to satisfy the required parameters. These preliminary decisions are often predicated on the tradeoffs between different adhesive chemistries. They may vary widely, and in many cases are essential in achieving the needed goals in adhering parts and surfaces properly. About Master Bond EP37-3FLF Master Bond EP37-3FLF is an exceptionally flexible epoxy compound that forms high strength bonds that stand up well to physical impact and severe thermal cycling and shock, making it ideal for e-textile applications. Because it is flexible and produces a lower exotherm — heat released during the polymerization process — than conventional epoxy systems, EP37-3FLF lessens the stress on sensitive electronic components during cure. Reducing stress during cure is essential for protecting fragile die and other components in ultrathin, flexible electronic packages. EP37-3FLF bonds well to a variety of substrates, including metals, composites, glass, ceramics, rubber, and many plastics. It offers superior electrical insulation properties, outstanding light transmission, especially in the 350- to 2000-nm range, and is serviceable at temperatures from 4K to 250°F. EP37-3FLF can be cured in 2-3 days at room temperature or in 2-3 hours at 200°F. Optimal properties are achieved by curing overnight at room temperature followed by an additional 1-2 hours at 200°F. 
Master Bond EP37-3FLF was selected as one of six adhesives tested in a study of flexible electronic packaging for e-textiles conducted at the University of Southampton. The shape of the wearable device, flexing and bending requirements, joining similar or dissimilar substrates, how long it will be worn, and where it will be worn are some of the factors that determine the type of adhesive. The types of stresses the device will be exposed to and the environmental conditions are also consequential. Viscosity, cure speed, gel time, working life, and pot life are significant from a processing standpoint. Adhesives are gaining popularity for wearable electronic devices because many provide structural integrity; good drop, shock, and impact performance; thermal stability; and resistance to moisture, to fluids such as sunscreen, oil, soda, and sweat, to water immersion, and to normal wear and tear. Specific grades feature good electrical and thermal conductivity, bond well to dissimilar substrates, minimize stress, have high elongation or flexibility, and can be applied in ultrasmall areas for miniaturized designs. Special dual-curing products combine a UV tacking capability with a secondary heat-cure mechanism for fast cures. User-friendly, solvent- and lead-free compositions have low halogen content, excellent thermal-cycling capability, and adhere well to metals, composites, many plastics, and fabrics. Specific Master Bond adhesives pass USP Class VI and ISO 10993-5 standards for biocompatibility. These may be utilized in wearable, invasive, and non-invasive medical sensors used for surgeries, diagnostics, therapeutics, and monitoring systems. Some prominent applications range from sleep apnea therapy devices, dialysis machines, videoscopes, infusion pumps, monitoring equipment, and respiratory equipment to blood-pressure-monitoring instruments and body-temperature-measurement devices.
Mobile wellness wearable sensors have been instrumental in monitoring our fitness, calorie burn, and activity levels. Through the use of many different polymeric systems, including many that contain nanofillers, Master Bond has provided medical sensor manufacturers with adhesives that aid in the design of miniaturized, lighter-weight, lower-power devices. Several case studies have cited Master Bond adhesives in medical sensors. In one, researchers at the University of Tennessee used EP30Med in measurement tools and gauges for their medical device applications. EP30Med was chosen for its low viscosity, non-rapid setup time, USP Class VI approval, and other performance properties. Another case study involves electronic textile (e-textile) technology, in which microelectronics are embedded into fabrics. In this study, the University of Southampton investigated the influence of material selection and component dimensions on the reliability of an e-textile packaging approach under development. The key measures of reliability considered in this study were the shear load and bending stresses of the adhesive and substrate layers of the flexible package. One of the adhesives tested was Master Bond EP37-3FLF.
  • Jet Fighter With a Steering Wheel: Inside the Augmented-Reality Car HUD
    Jan 12, 2022 06:00 AM PST
    The 2022 Mercedes-Benz EQS, the first all-electric sedan from the company that essentially invented the automobile in 1885–1886, glides through Brooklyn. But this is definitely the 21st century: Blue directional arrows seem to paint the pavement ahead via an augmented-reality (AR) navigation system and color head-up display, or HUD. Digital street signs and other graphics are superimposed over a camera view on the EQS’s much-hyped “Hyperscreen”—a 142-centimeter (56-inch) dash-spanning wonder that includes a 45-cm (17.7-inch) OLED center display. But here’s my favorite bit: As I approach my destination, AR street numbers appear and then fade in front of buildings as I pass, like flipping through a virtual Rolodex; there’s no more craning your neck and getting distracted while trying to locate a home or business. Finally, a graphical map pin floats over the real-time scene to mark the journey’s end. It’s cool stuff, albeit for folks who can afford a showboating Mercedes flagship that starts above US $103,000 and topped $135,000 in my EQS 580 test car. But CES 2022 in Las Vegas saw Panasonic unveil a more-affordable HUD that it says should reach a production car by 2024. Head-up displays have become a familiar automotive feature, with a speedometer, speed limit, engine rpms, or other information that hovers in the driver’s view, helping keep eyes on the road. Luxury cars from Mercedes, BMW, Genesis, and others have recently broadened HUD horizons with larger, crisper, more data-rich displays. Panasonic, powered by Qualcomm processing and AI navigation software from Phiar Technologies, hopes to push into the mainstream with its AR HUD 2.0. Its advances include an integrated eye-tracking camera to accurately match AR images to a driver’s line of sight. 
Phiar’s AI software lets it overlay crisply rendered navigation icons and spot or highlight objects including vehicles, pedestrians, cyclists, barriers, and lane markers. The infrared camera can monitor potential driver distraction, drowsiness, or impairment, with no need for a standalone camera as with GM’s semiautonomous Super Cruise system. Panasonic's AR HUD system includes eye-tracking to match AR images to the driver's line of sight. Andrew Poliak, CTO of Panasonic Automotive Systems Company of America, said the eye tracker spots a driver’s height and head movement to adjust images in the HUD’s “eyebox.” “We can improve fidelity in the driver’s field of view by knowing precisely where the driver is looking, then matching and focusing AR images to the real world much more precisely,” Poliak said. For a demo on the Las Vegas Strip, using a Lincoln Aviator as a test mule, Panasonic used its SkipGen infotainment system and a Qualcomm Snapdragon SA8155 processor. But AR HUD 2.0 could work with a range of in-car infotainment systems. That includes a new Snapdragon-powered generation of Android Automotive, an open-source infotainment ecosystem distinct from the Android Auto phone-mirroring app. The first-gen, Intel-based system made an impressive debut in the Polestar 2, from Volvo’s electric brand. The uprated Android Automotive will run in 2022’s lidar-equipped Polestar 3, an electric SUV, and potentially millions of cars from General Motors, Stellantis, and the Renault-Nissan-Mitsubishi alliance. Gene Karshenboym helped develop Android Automotive for Volvo and Polestar as Google’s head of hardware platforms. Now he’s chief executive of Phiar, a software company in Redwood City, Calif. Karshenboym said AI-powered AR navigation can greatly reduce a driver’s cognitive load, especially as modern cars put ever more information at drivers’ eyes and fingertips. 
Current embedded navigation screens force drivers to look away from the road and translate 2D maps as they hurtle along. “It’s still too much like using a paper map, and you have to localize that information with your brain,” Karshenboym says. In contrast, following arrows and stripes displayed on the road itself—a digital yellow brick road, if you will—reduces fatigue and the notorious stress of map reading. It’s something that many direction-dueling couples might give thanks for. “You feel calmer,” he says. “You’re just looking forward, and you drive.” The system classifies objects on a pixel-by-pixel basis at up to 120 frames per second. Potential hazards, like an upcoming crosswalk or a pedestrian about to dash across the road, can be highlighted by AR animations. Phiar’s synthetic model trained its AI for snowstorms, poor lighting, and other conditions, teaching it to fill in the blanks and create a reliable picture of its environment. And the system doesn’t require granular maps, monster computing power, or pricey sensors such as radar or lidar. Its AR tech runs off a single front-facing, roughly 720p camera, powered by a car’s onboard infotainment system and CPU. “There’s no additional hardware necessary,” Karshenboym says. The company is also making its AR markers appear more convincing by “occluding” them with elements from the real world. In Mercedes’s system, for example, directional arrows can run atop cars, pedestrians, trees, or other objects, slightly spoiling the illusion. In Phiar’s system, those objects can block off portions of a “magic carpet” guidance stripe, as though it were physically painted on the pavement. “It brings an incredible sense of depth and realism to AR navigation,” Karshenboym says. Once visual data is captured, it can be processed and sent anywhere an automaker chooses, whether a center display, a HUD, or passenger entertainment screens. 
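At its core, the occlusion trick described above is a per-pixel mask operation: the guidance stripe is drawn only where the segmentation says no real-world object sits in front of the road surface. The NumPy sketch below is a toy illustration of that idea with made-up array names, not Phiar's implementation:

```python
import numpy as np

def composite_guidance(frame: np.ndarray,
                       stripe: np.ndarray,
                       obstacle_mask: np.ndarray) -> np.ndarray:
    """Overlay an AR guidance stripe on a camera frame, hiding it wherever
    the per-pixel segmentation marks an occluding object (car, pedestrian,
    tree) in front of the road surface."""
    out = frame.copy()
    # Draw the stripe only where it exists AND nothing occludes it.
    visible = (stripe > 0) & ~obstacle_mask
    out[visible] = stripe[visible]
    return out

# Toy 4x4 single-channel example: stripe across the bottom two rows,
# an "obstacle" covering the bottom-right corner.
frame = np.zeros((4, 4), dtype=np.uint8)
stripe = np.zeros_like(frame)
stripe[2:, :] = 255
obstacle_mask = np.zeros((4, 4), dtype=bool)
obstacle_mask[2:, 2:] = True

result = composite_guidance(frame, stripe, obstacle_mask)
print(result)
```

In the output, the stripe shows only in the bottom-left quadrant; the pixels "behind" the obstacle stay untouched, which is what makes the stripe read as painted on the pavement rather than floating over the scene.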
Those passenger screens could be ideal for Pokémon-style games, the metaverse, or other applications that combine real and virtual worlds. Poliak said some current HUD units hog up to 14 liters of volume in a car. A goal is to reduce that to 7 liters or less, while simplifying and cutting costs. Panasonic says its single optical sensor can effectively mimic a 3D effect, taking a flat image and angling it to offer a generous 10- to 40-meter viewing range. The system also advances an industry trend by integrating display domains—including a HUD or driver’s cluster—in a central, powerful infotainment module. “You get smaller packaging and a lower price point to get into more entry-level vehicles, but with the HUD experience OEMs are clamoring for,” Poliak said.
  • Why IoT Sensors Need Standards
    Jan 11, 2022 11:00 AM PST
    Sensors traditionally have been used for camera imaging, as well as for communicating information about humidity, temperature, motion, speed, proximity, and other aspects of the environment. The devices have become key enablers for a host of new technologies essential to business and to everyday life, from turning on a light switch to managing one’s health. Several factors are fueling sensors’ growth, including miniaturization, increased functionality, and higher levels of integration into electronic circuitry. Greater levels of automation are also being incorporated into products and systems, such as in Internet of Things (IoT) and Industrial Internet of Things (IIoT) applications. Prominent users of sensors include the defense, energy, health care, and transportation industries. The global sensor market is large and growing fast. By one estimate, it is projected to reach US $346 billion in sales by 2028, up from $167 billion in 2019.

SAFE AND RELIABLE APPLICATIONS

As the sensor industry races to take advantage of market opportunities, the need to ensure the devices will operate safely and reliably is a growing concern. In the energy industry, for example, drill rigs for oil and gas exploration are now equipped with sensors to achieve optimal, safe performance at the lowest possible cost. The sensors must operate under harsh environmental conditions. Their failure could result in a rig being taken out of service, leading to significant, costly downtime. In industrial applications, worker safety would be compromised if gas sensors failed to detect the presence of toxic fumes. If lidar, the light detection and ranging remote-sensing system, fails in a semiautonomous vehicle, the vehicle will be unable to function properly. Lidar is fundamental to advanced driver-assistance systems (ADAS). 
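As a quick check on those market figures (our arithmetic, not part of the cited estimate), growth from $167 billion in 2019 to $346 billion in 2028 implies a compound annual growth rate of roughly 8.4 percent:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1.0 / years) - 1.0

# $167B (2019) -> $346B (2028) spans nine annual growth steps.
rate = cagr(167e9, 346e9, 2028 - 2019)
print(f"{rate:.1%}")  # roughly 8.4% per year
```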
Because there are now thousands of sensor products on the market, adherence to standards that could improve their performance or accelerate the development of new applications has grown in importance, as has the need for independent conformity and certification protocols. It has become challenging to effectively deploy sensors in complex IoT and IIoT applications given the interoperability issues that can arise when attempting to integrate systems from multiple vendors. Hardware compatibility, wired and wireless connectivity, security, software development, and cloud computing are key interoperability considerations as well as major issues in their own right.

STANDARDS FOR IOT SENSORS

For many years, the IEEE Standards Association (IEEE SA) has provided an open platform for users, those in academia, and technical experts from sensor manufacturers to come together to develop standards. Here are a few examples of IEEE standards and projects that have come from the collaboration.

IEEE 2700-2017: IEEE Standard for Sensor Performance Parameter Definitions. A common framework for performance specification terminology, units, conditions, and limits for eight common sensor types.

IEEE P1451.99: IEEE Standard for Harmonization of Internet of Things Devices and Systems. Current implementations of IoT devices and systems do not provide a way to share data or for an owner of such devices to authorize who has the right to control them or access the devices’ data. This standard will define a metadata bridge to facilitate IoT protocol transport for sensors, actuators, and other devices. It will address issues of security, scalability, and interoperability for cost savings and reduced complexity. The standard will offer a data-sharing approach that leverages current instrumentation and devices used in industry.

IEEE P2020: Standard for Automotive System Image Quality.
Most automotive camera systems have been developed independently, with no standardized reference point for calibration or measurement of image quality. This standard will address the fundamental attributes that contribute to image quality for ADAS applications; identify existing metrics and other useful information relating to the attributes; define a standardized suite of objective and subjective test methods; and specify tools and test methods to facilitate standards-based communication and comparison among system integrators and component vendors.

IEEE P2520: Standard for Testing Machine Olfaction Devices and Systems. This standard aims to establish a collection of performance measurement methods and conformity assessment processes for e-nose devices that simulate human chemosensory responses with greater accuracy and precision.

IEEE P2846: Assumptions for Models in Safety-Related Automated Vehicle Behavior. This standard will describe the minimum set of reasonable assumptions used in the development of safety-related models that are part of automotive ADAS. P2846 will consider rules of the road and their regional and temporal dependencies, which involve the impact of previous behavior.

REGISTRY AND CERTIFICATION

IEEE SA offers the IEEE Sensors Registry, a global Web-based service that allows manufacturers to enter their sensors’ certifications, the standards they adhere to, and product data sheets so that buyers can find the right product. IEEE conducts an audit process on the submitted information to ensure its accuracy.

WEBINARS AND ROUNDTABLE

These free on-demand webinars are available: Exploring the Importance of Sensors and Their Real-Life Applications in Life-Saving Wearable Devices, and Path to Sensors Interoperability. The first in a series of new webinars, Are Sensors the Weakest Link to Cyber Attacks?, is scheduled for 2 February at 1 p.m. Eastern Time. IEEE SA plans to host an industry roundtable during the first quarter of this year. 
It will focus on the creation of a comprehensive plan and timeline to address interoperability and cybersecurity issues for IoT sensor networks. Participants will include technology leaders from industry, government, and academia. Contact sensors-rt@ieee.org for more information.
  • Labrador Addresses Critical Need With Deceptively Simple Home Robot
    Jan 11, 2022 10:02 AM PST
    It’s not often that we see a home robot that manages to be both relatively affordable and realistically technologically achievable while also solving a clear and widespread need. The iRobot Roomba was arguably the first good example of such a robot, but I’m hard pressed to think of what the second good example would be, which is why this new home robot from Labrador Systems is so exciting—it’s essentially a semi-autonomous mobile table, which is poised to have a huge impact on people who could really use exactly that. If you’re not sure why anyone would need a semi-autonomous mobile table, then congratulations on likely being young enough and healthy enough that you aren’t currently experiencing significant mobility problems. But for many older adults as well as adults with disabilities, reliance on mobility aids (like canes or walkers) means that moving around while carrying things in hands or arms is difficult and dangerous, and for some, moving at all can be painful or exhausting. This can necessitate getting in-home help, or even having to leave your home completely, and this is the problem that Labrador wants to solve, or at least mitigate, with its mobile home robot. When we spoke to Labrador cofounder Mike Dooley back in October of 2019, here’s how he described the (then still secret) robot that he was working on: One of the core features of our robot is to help people move things where they have difficulty moving themselves, particularly in the home setting. That may sound trivial, but to someone who has impaired mobility, it can be a major daily challenge and negatively impact their life and health in a number of ways. Some examples we repeatedly hear are people not staying hydrated or taking their medication on time simply because there is a distance between where they are and the items they need. 
Once we have those base capabilities, i.e., the ability to navigate around a home and move things within it, then the robot becomes a platform for a wider variety of applications.

Two years later, this is the Labrador Retriever.

“Our robots are designed to serve as an extra pair of hands and lighten the load of everyday tasks in the home.” —Mike Dooley

Labrador’s demonstration video comes after several months of in-home testing of an alpha version of the robot, which started back in February of 2021. “We saw usage rates as high as a hundred times a month,” Dooley told us. “All of the pilot users rated the robot highly, and two of them asked if they could invest in the company.” And honestly, I can totally understand why. Despite its apparent simplicity, this is quite possibly the most exciting home robot I’ve seen in years, in the sense that it’s not just useful, but also needed. For people with limited mobility, the Labrador robot offers a place to store and transport heavy items that might be impossible for those people to carry or move on their own. Items that are used regularly, like water or books or medications or whatever, can live semi-permanently in the storage area in the middle deck of the robot, and in total the robot can handle up to 25 pounds (11 kilograms) of whatever you can fit on it. One version of the robot is at a fixed height, while a slightly more expensive version can raise and lower the height by about one-third of a meter and do some other clever stuff that we'll get into in a moment. Autonomy is fairly straightforward. Labrador uses 3D visual simultaneous localization and mapping, or SLAM, combined with depth sensors and bumpers on all sides to navigate through home environments, managing (we’re told) tight spaces, ADA-compliant floor transitions, and low lighting conditions, although you’ll have to keep clutter and cords away from the robot’s path and possibly tape down troublesome carpet edges. 
When you first get a robot, a Labrador representative will (remotely) drive it around to build a map and to set up the “bus stops” where the robot can be sent. This greatly simplifies control, since the end user can then just speak a destination into a smartphone app or a voice assistant and the robot will make its way there, zero training time required. The robot can also be scheduled to be at specific spots at specific times. All-day battery life is achievable since the robot spends most of its time being a table and not moving. And it’ll even charge the user's small electronics on its lower deck. A quick word on privacy, because this is a mobile robot with cameras on it that has access to your home. Dooley tells us that aside from the setup and troubleshooting, Labrador’s robots are intended to be fully autonomous, without a human in the loop. Having a remote-access capability is certainly useful, and it’s easy to see how family members might be interested in being able to leverage the robot in this way in an emergency. But, there’s really no good way of giving someone on-demand remote access while also making sure that the robot respects your privacy at all times. This isn’t a problem that’s unique to Labrador, but Dooley says that the company plans to add a hardware or software switch that can enable or disable remote connectivity, which seems like a reasonable compromise. Of course, you can’t very well call a robot a “Retriever” if it doesn’t solve the problem of bringing things back to you without relying on a person on the other end to place those things onto the robot. I’ve lost count of the number of beer- (or whatever-) fetching robots that attempt to use manipulators to open a refrigerator and find and grasp something inside, which is a superhard problem that is years away from being solved in a practical in-home way. 
Labrador has quite cleverly side-stepped this problem through a minor amount of environmental modification, which is really the right way of going about introducing home robots right now. Labrador has designed a system of pallets and trays that allow its Retriever robot to carry out fetching tasks autonomously. Pallets can be attached to tables or countertops, and then the robot can be instructed to interface with a specific pallet and retrieve the tray. It takes a little bit of forethought, since obviously the robot can only fetch what’s on the tray and nothing else, but with a minimal amount of affordable infrastructure, the amount and variety of stuff that Retriever can bring to you is significantly increased. One use case that Labrador is actively developing is a pallet and tray that can work with a small refrigerator. This is going to take more significant hardware mods (like, a custom fridge that will presumably come with a significant added cost), but the idea is that a meal or two on a tray could be preloaded into the fridge, kept cold, and then requested at any time. You could also imagine a microwave modified to work in a similar way, so that meals could be heated on demand as well. Before we get too caught up in add-ons like fridges and microwaves that aren’t yet available, let’s talk about the cost. Arguably the biggest challenge that Labrador has to face is not a technical one (although there are plenty of technical challenges to overcome with any mobile home robot), but rather an economic one: Users have to be able to afford it. This is a bit more complicated for Labrador in particular, since the robot is designed for people with disabilities or people who are likely retired or both, and who therefore may not have the kind of disposable income that’s usually associated with a mobile home robot. 
Labrador’s base model robot, called Caddie (which has a fixed height and is not compatible with the pallet/tray system), will cost US $1,500 plus $100 per month for 36 months. Retriever, which does the clever tray stuff, goes for $150 per month; once the robot has been paid off, it’s all yours, although there will likely continue to be a (lower) monthly fee for support and new features. This is not an insignificant cost, but the context is critical here: rather than thinking of the cost in a vacuum, it’s important to consider what services Labrador’s robots are potentially replacing. In-home care from humans is expensive. A disabled person might need help for only a couple of hours a day, but the cost for that help could easily run into the hundreds of dollars per week. And even if they’re getting care from a human with whom they have a relationship and aren’t paying money for it, that care is still by no means free.

“There’s a mental cost to always asking for help. If you’re in a situation where your spouse is your caregiver, it can be stressful for both individuals. And I think that’s why we get this super strong reaction of, ‘this really gives me some degree of independence back.’ We’re not trying to replace a person with our robot, we’re just trying to give people the choice, especially for simple things, of whether they ask for help or not.” —Mike Dooley

Consider also the cost of being restricted in what you can do by whether you have someone around to help you or not: imagine being able to have lunch or do the laundry or even get a drink of water only on someone else’s schedule. Labrador’s robots aren’t intended to replace the care of humans, but rather to extend the windows of time during which folks with mobility challenges can be safe and comfortable on their own. From that perspective, for many if not most users, Labrador’s robots will likely have no trouble paying for themselves in short order. 
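For context, the pricing above works out as follows over the 36-month term. The in-home-care hourly rate here is a hypothetical figure for illustration (the article says only "hundreds of dollars per week"), not a number from Labrador:

```python
# Labrador pricing as described: Caddie is US $1,500 up front plus
# $100/month for 36 months; Retriever is $150/month for 36 months.
caddie_total = 1_500 + 100 * 36   # total over the 36-month term
retriever_total = 150 * 36

# Hypothetical comparison: 2 hours/day of in-home help at an assumed
# $25/hour, over the same three years.
care_per_week = 2 * 7 * 25
care_total = care_per_week * 52 * 3

print(caddie_total, retriever_total, care_total)
```

Under those assumptions, either robot costs a little over $5,000 in total, while even modest part-time human help runs an order of magnitude more over the same period.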
Ideally, the relative difference in cost between a robot and in-home care from a professional human would provide an incentive for health care systems and insurance companies to provide subsidies, since it seems like it would improve care for members while lowering the overall cost. This hasn’t happened yet, but it’s something that Labrador is working on. To its credit, Labrador is also being very, very careful about the rollout of its robots. The robot I was introduced to in secret at CES 2020 was very similar to this prerelease version announced at CES 2022, and Labrador has spent the intervening years making sure that it’s manufacturable, supportable, and can work reliably in a wide variety of homes. But it’s very much still a prerelease version. Labrador doesn’t expect to be in full production until the second half of 2023. The company is taking fully refundable $250 reservations now, though, and if you’re lucky (and live in the right place) you might be able to get access to a beta unit earlier than that. For the rest of us, 2023 does seem like a long time to wait, but it’ll be absolutely worth it if Labrador can get this right.
  • Why Multi-Functional Robots Will Take Over Commercial Robotics
    Jan 11, 2022 08:36 AM PST
    This is a sponsored article brought to you by Avidbots. The days of having single-purpose robots for specific tasks are behind us. A robot must be multi-functional to solve today’s challenges, be cost-effective, and increase the productivity of an organization. Yet most indoor autonomous mobile robots (AMRs) today are specialized, often addressing a single application, service, or market. These robots are highly effective at completing the task at hand; however, they are limited to addressing a single use case. While this approach manages development costs and complexity for the developer, it may not be in the best interest of the customer. To set the stage for increased growth, the commercial AMR market must evolve and challenge the status quo. A focus on integrating multiple applications and processes will increase the overall productivity and efficiency of AMRs. The market for autonomous mobile robots is expected to grow massively, and at Avidbots we see a unique opportunity to offer multi-application, highly effective robotic solutions. Today, there are many application-specific AMRs solving problems for businesses. Common applications include indoor parcel delivery, security, inventory management, cleaning, and disinfection, to name a few. The market for these types of AMRs is expected to grow into the tens of billions of dollars by 2025, as projected by Verified Market Research. This is a massive opportunity for growth for the AMR industry. It is also interesting to note that the sensor sets and autonomous navigation capabilities of the various single-application indoor AMRs today are similar. Hence, there is an opportunity to combine useful functionalities into a single multi-application robot, and yet the industry as a whole has been slow to make such an advancement.

Today's Robots Focus on Single Tasks

There’s never been a better time for the AMR industry to take strategic steps, given the changes we’ve had to embrace as a result of the COVID-19 pandemic. 
In fact, many robots brought to market recently look to address disinfection, and the majority have been single-purpose, including UVC robots. With heightened standards of cleanliness in mind, let’s consider the potential of extending a cleaning robot from a single use to performing both floor cleaning and high-touch surface disinfection. In September 2021, Avidbots launched the Disinfection Add-On, expanding the functionality of the company’s fully autonomous floor-scrubbing robot, Neo. By simply adding a piece of hardware and pushing a software update, Neo now serves multiple purposes.

The Future: Multi-Purpose Robots

Multi-application robots like this not only provide more value through additional convenience to end customers; compared with single-application robots, they also deliver value through their economic impact. The economics of multi-application robots are simple: combining two applications on one robot can deliver significant cost savings versus running two single-use robots. For example, the price to rent a disinfection-only robot or a cleaning-only robot is in the neighborhood of US $2,000–3,000 per month per robot. But Neo with its Disinfection Add-On extends beyond its primary function of floor cleaning to disinfect for a few hundred dollars per month. Disinfection is available at a cost that is around one-tenth the price of a single-purpose disinfection robot or manual disinfection. These savings can be realized because the main cleaning function already pays for the AMR itself, and the disinfection is merely a hardware and software extension. Other OEMs are following this trend: Brain Corp. combines cleaning with shelf scanning, leveraging existing autonomous floor-scrubbing robots as the platform. Similarly, Badger combines hazard analysis (spill detection, etc.) with a shelf-scanning robot as the platform. 
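The one-robot-versus-two arithmetic above can be made concrete. The rental midpoint and add-on price below are assumptions drawn from the ranges quoted in the text ("$2,000–3,000 per month" and "a few hundred dollars"):

```python
# Assumed monthly figures based on the ranges quoted in the article.
single_purpose_rental = 2_500     # midpoint of the $2,000-3,000/month range
disinfection_addon = 300          # "a few hundred dollars" per month, assumed

two_single_robots = 2 * single_purpose_rental            # cleaning + disinfection
multi_app_robot = single_purpose_rental + disinfection_addon

monthly_savings = two_single_robots - multi_app_robot
print(monthly_savings)
```

With these assumed figures, the multi-application setup saves on the order of $2,000 per month, and the add-on indeed costs roughly one-tenth of a dedicated disinfection robot.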
Meet Neo 2, a Fully Autonomous Robotic Floor Cleaner

This video presents an overview of Neo 2, Avidbots' advanced robotic platform optimized for autonomous cleaning and sanitization. Neo is equipped with the Avidbots AI Platform featuring 10 sensors, resulting in 360° visibility and intelligent obstacle avoidance. Combined with integrated diagnostics and Avidbots Remote Assistance, Neo offers advanced autonomy, navigation, and safety capabilities. Video: Avidbots

There are a few parallels between the current state of robotics and the early computer industry of the 1970s. In the early '70s, when mainframes still dominated computer system sales, several manufacturers released low-cost desktop computers that were designed to support multiple applications, peripherals, and programming languages. The low cost of desktop computers, the key “killer apps,” and the large number of potential applications resulted in large growth and the proliferation of desktop computers worldwide, which eventually overtook mainframe sales in 1984. As sales of AMRs increase and the cost of processing systems continues to drop, mass-produced AMR OEMs will likely be capable of delivering AMRs at a significantly lower price in the coming years. Computer systems like the NVIDIA Xavier NX, which are designed specifically for leading-edge robotic perception applications, paint a promising picture of the evolution of computer systems for indoor AMRs. We look forward to a day in the near future when indoor AMRs will be sold for much less than US $10,000. Lowering the cost of AMRs is certainly key to enabling larger and faster growth in the industry.

About Avidbots

Avidbots is a robotics company with a vision to bring robotic solutions into everyday life to increase organizational productivity and to do that better than any other company in the world. 
Our groundbreaking product, the Neo autonomous floor-scrubbing robot, is deployed around the world and trusted by leading facilities and building service companies. Headquartered in Kitchener, ON, Canada, Avidbots offers comprehensive service and support to customers on five continents. Learn more about Avidbots → There is an open question about the “killer app” in AMRs for commercial spaces: what application can best serve as a platform for multi-application robots? Cleaning is certainly a candidate, given that it's a service needed in most indoor spaces and saves two to four hours per night of manual labor. However, in other industries such as hospitality and food service, parcel delivery has seen large growth and success because it saves many hours daily. In either case, customers will still likely benefit from having multiple potential applications in their AMRs. While only time will tell how the industry will evolve, it's clear that delivering several applications with a single robot, at a much lower cost than multiple robots (or manual counterparts), has the potential to make AMRs more attractive. We can take the industry to new heights by continuing to push the boundaries, including developing multi-application robots that can be used across industries and allow organizations to focus on revenue-generating activities. Our industry-leading multi-application solution is growing and so is our team of Avidbotters, including robotics engineers. If you’re interested in learning more about Avidbots or exploring career opportunities, visit Avidbots.
  • Building Better Qubits
    Jan 07, 2022 11:00 AM PST
    While growing up in Germany, Heike Riel helped her father design and build furniture in the family workshop. She says the experience taught her that “precision and creativity are necessary to build something excellent.” “Working as a furniture maker was actually a very nice experience because you built something that is high quality and lasts,” Riel says. “When I go back to my hometown, many of our clients still have the furniture that I helped build for them.” Woodworking also instilled in her a passion for mathematics and physics, she says, adding that she knew she someday would pursue a career in one of those fields. Today the IEEE senior member is head of science and technology at IBM Research in Zurich. She is also the lead for IBM Research Quantum Europe and Africa, a group that aims to create technologies in artificial intelligence, nanotechnology, quantum computing, and related fields. The IBM Fellow has helped develop several groundbreaking technologies including OLED displays. She has conducted research in semiconducting nanowires and other nanostructures, as well as molecular electronics. She has authored more than 150 publications and holds more than 60 patents. Riel is the recipient of this year’s IEEE Andrew S. Grove Award “for contributions to materials for nanoscale electronics and organic light-emitting devices.” The award is sponsored by the IEEE Electron Devices Society. “I couldn't believe that I was selected [to receive] this very prestigious award,” Riel says. “I feel very humbled and honored because I have great respect for Andrew S. Grove, who was a true technical and business leader in the semiconductor industry, and many people I admire have received this award.” THE FIRST OLED DISPLAY After completing a woodworking apprenticeship in 1989, Riel decided to pursue a master’s degree in physics. She graduated in 1997 from Friedrich-Alexander-Universität Erlangen-Nürnberg, in Germany. 
She joined IBM Research in Zurich in 1998 while pursuing her doctorate in physics in collaboration with the University of Bayreuth, also in Germany. Her research focused on the optimization of multilayer organic light-emitting devices to be used in displays. After earning her Ph.D. in 2003 she worked at the lab as a research staff member. Riel’s research helped explain the physics behind charge transport and recombination, which govern the operation of all electronic devices, as well as light outcoupling in organic semiconductors. “Back then, people didn’t believe it could be done, but that didn’t stop us.” Her findings helped improve the efficiency, color, and endurance of OLEDs, which made it possible for her and the team to develop the first 51-centimeter full-color active-matrix OLED display. The technology is made by placing thin films of light-emitting organic compounds between two conductors. When voltage is applied, a bright light is emitted from each individual pixel. OLEDs can be found in TV screens, tablets, and smartphones. “We had one year to scale organic LEDs to make a 20-inch display in three different colors with pixel sizes of 100 micrometers by 300 micrometers,” Riel said in a 2021 interview for IBM’s Research blog. “Back then [in the early 2000s], people didn’t believe it could be done, but that didn’t stop us.” She says it’s rewarding to have developed something consumers use every day. “When my husband bought our first OLED television, it was really exciting,” she recalls. “Suddenly I owned a product that is using technologies I developed.” NANOWIRE TRANSISTORS Riel went on to become head of IBM’s nanoscale electronics group, which develops semiconducting nanowires and nanostructures for transistors. She and her team helped develop the first vertical surround-gate nanowire field-effect transistor in 2006. Researchers around the world had been trying to reduce the size of transistors for decades. 
But each time the transistors were miniaturized, their performance decreased; eventually they couldn’t effectively control electric current. “It became clear,” Riel says, “that how we built transistors had to change in the early 2000s.” “We had to come up with new ideas for how to improve the quality of [transistors] when we make them smaller,” she says. “We explored and developed new materials and integration schemes for nanoscale electronics and new transistor architectures based on semiconducting nanowires.” Riel and her team implemented gate-all-around and cylindrical nanowires for transistors. Because the nanowires are cylindrical, the transistor gate can be wrapped around the nanowires—which allows better control of the current, according to a research paper authored by Riel and her colleagues. In 2017 IBM released a new transistor—the Nanosheet—that uses the concepts Riel says she and her team developed between 2005 and 2012. Each transistor is made up of three stacked horizontal silicon sheets, each a few nanometers thick and completely surrounded by a gate. Last year IBM unveiled the world’s first 2-nm node chip, which was based on Nanosheet technology. “IBM claims this new chip will improve performance by 45 percent using the same amount of power, or use 75 percent less energy while maintaining the same performance level, as today’s 7 nm-based chips,” an IEEE Spectrum article said. ENHANCING THE QUBIT Riel is currently conducting quantum-computing research. She and her team are developing qubits and related technologies. Classical computers switch transistors on and off to represent data as ones or zeros. Because of the nature of quantum physics, qubits can be in a state of superposition, whereby they are both 1 and 0 simultaneously, as explained in a 2020 IEEE Spectrum article. Quantum computers can perform some tasks far faster and more accurately than conventional machines. 
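The superposition idea described above can be illustrated with a toy simulation: when a qubit is measured, the Born rule says the probability of reading 0 is the squared magnitude of its |0⟩ amplitude, so an equal superposition yields each outcome about half the time. This is a minimal classical sketch for intuition, not how real quantum hardware is programmed:

```python
import random

def measure_qubit(amp0, amp1, shots=100_000):
    """Simulate repeated measurement of a single qubit in the state
    amp0*|0> + amp1*|1>. Born rule: P(measure 0) = |amp0|**2."""
    p0 = abs(amp0) ** 2
    assert abs(p0 + abs(amp1) ** 2 - 1.0) < 1e-9, "state must be normalized"
    zeros = sum(1 for _ in range(shots) if random.random() < p0)
    return zeros / shots

# Equal superposition (|0> + |1>)/sqrt(2): roughly half the shots read 0.
freq0 = measure_qubit(2 ** -0.5, 2 ** -0.5)
```

The sketch captures only the measurement statistics; the power of real qubits comes from interference between amplitudes before measurement, which a simple coin-flip model cannot reproduce.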
“We are trying to figure out whether a new material would make them function better and if [certain materials] could have advantages over today’s processors,” Riel says. Her team has been experimenting with silicon spin qubits and topological phenomena. She and her team are taking a holistic approach, she says, and are building a quantum system from the ground up—creating the qubit, quantum processor unit technology, control electronics, and software. In November the IBM team demonstrated the Eagle, a 127-qubit chip: the world’s first quantum processor to break the 100-qubit barrier. Her team also is working to find a good way to connect two quantum processors. In quantum computing, she says, transduction is necessary to transport information over a long distance from one processor to another. Quantum transduction is the process of converting quantum signals from a low-energy photon to a high-energy photon to protect its state during transmission. “To do this conversion, you need sophisticated technology,” Riel says. “We are exploring different approaches and figuring out which is the best and how we can achieve the specifications that you need for doing it.” CONNECTING THROUGH IEEE Riel says she joined IEEE in 2007 so she could contribute to the community, participate in conferences, and connect with other engineers. A member of the IEEE Electron Devices Society, she has helped to organize events including the IEEE European Solid-State Device Research Conference, the IEEE International Electron Devices Meeting, and the IEEE Symposium on VLSI Technology and Circuits. Riel says IEEE has enriched her career, allowing her to keep up with technology advances and to network with peers.
  • Video Friday: Welcome to 2022
    Jan 07, 2022 09:57 AM PST
Your weekly selection of awesome robot videos Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!): ICRA 2022: 23–27 May 2022, Philadelphia ERF 2022: 28–30 June 2022, Rotterdam, Netherlands Let us know if you have suggestions for next week, and enjoy today's videos. Happy Holidays from Voliro! [ Voliro ] Thanks, Daniel! Merry Christmas from the Autonomous Systems Lab! [ ASL ] The Sber robotics laboratory warmly wishes you a happy New Year! [ Sberbank Robotics Laboratory ] Thanks, Alexey and Mike! Holiday Greetings from KIMLAB! [ KIMLAB ] Thanks, Joohyung! Quebec is easy mode for wintery robot videos. [ NORLAB ] Happy New Year from Berkshire Grey! [ Berkshire Grey ] Introducing John Deere’s autonomous 8R Tractor for large-scale production. To use the John Deere autonomous tractor, a farmer only needs to transport the machine to a field and configure it for autonomous operation. Using John Deere Operations Center Mobile, he or she can swipe from left to right to start the machine. While the machine is working, the farmer can leave the field to focus on other tasks while monitoring the machine’s status from their mobile device. [ John Deere ] I appreciate the idea that this robot seems to have some conception of personal space and will react when that space is rudely violated. [ Engineered Arts ] Merry Christmas and Happy New Year from Xiaomi Robotics Lab! [ Xiaomi ] Thanks, Yangwei! We developed advanced neural control with proactive behavior learning and short-term memory for complex locomotion and lifelong adaptation of autonomous walking robots.
The control method is inspired by a locomotion control strategy used by walking animals like cats, which use their short-term visual memory to detect an obstacle and take proactive steps to avoid colliding with it. [ VISTEC ] Thanks, Poramate! Not totally sure what this is from Exyn, but I do like the music. [ Exyn ] Nikon, weirdly, seems to be getting into the computer vision space with a high-speed, high-accuracy stereo system. [ Nikon ] Drone Badminton enables people with low vision to play badminton again, using a drone as the ball and a racket that can move the drone. It has the potential to diversify the physical activities available to people with low vision and to improve their physical and mental health. [ Digital Nature Group ] The Manta Ray program seeks to develop unmanned underwater vehicles (UUVs) that operate for extended durations without the need for on-site human logistics support or maintenance. [ DARPA ] A year in the life of Agility Robotics. [ Agility Robotics ] A new fabrication technique, developed by a team of electrical engineers and computer scientists, produces low-voltage, power-dense artificial muscles that improve the performance of flying microrobots. [ MIT ] What has NASA’s Perseverance rover accomplished since landing on the surface of Mars in February 2021? Surface Operations Mission Manager Jessica Samuels reflects on a year filled with groundbreaking discoveries at Jezero Crater and counts up the rover's achievements. [ NASA ] Construction is one of the largest industries on the planet, employing more than 10 million workers in the U.S. each year. Dusty Robotics believes in a future where robots and automation are standard tools employed by the construction workforce to build buildings more efficiently, more safely, and at lower cost. In this talk I'll tell the story of how Dusty Robotics originated, our journey through the customer discovery process, and our vision for how robotics will change the face of construction. [ Dusty Robotics ]
  • E-Waste Is a Cybersecurity Problem, Too
    Jan 07, 2022 07:17 AM PST
Many of us have obsolete devices relegated to the backs of our drawers, little museums of the technology of days long past. These forgotten laptops and phones seem like merely quaint relics, but if they’re not disposed of correctly, they can leak two different but dangerous things: toxic chemicals and sensitive data. The world generated a record 53.6 million metric tons of electronic waste in 2019, up more than 21 percent over five years, according to the United Nations’ most recent assessment. Only about 17 percent of that e-waste was recycled, and what happens to the rest can be detrimental for both human health and privacy. A new systematic review in The Lancet found that “people living in e-waste exposed regions had significantly elevated levels of heavy metals and persistent organic pollutants,” and it advocated for “novel cost-effective methods for safe recycling operations…to ensure the health and safety of vulnerable populations.” John Shegerian couldn’t agree more. He’s the cofounder and CEO of ERI, one of the largest electronics recycling-and-disposal providers in the world, and the coauthor of ERI’s 2021 book The Insecurity of Everything: How Hardware Data Security Is Becoming the Most Important Topic in the World. We spoke with Shegerian about e-waste’s effect on the future of our world and our privacy, and the role engineers can play in solutions. The conversation has been edited for length and clarity. John Shegerian, chairman/CEO of ERI and coauthor of the 2021 book The Insecurity of Everything. Photo: ERI The conclusion of the Lancet study surely isn’t a shock to you, but others might be surprised about the kinds of pollutants inside our old computers, phones, and TVs—and the danger they present when not handled responsibly. John Shegerian: When we got into the industry [in 2002], Al Gore had not yet won his awards for An Inconvenient Truth. There was no iPhone or Internet of Things.
But [e-waste] was still already the fastest-growing solid waste stream in the world. Now, in 2022, electronic waste is the fastest-growing waste stream by an order of magnitude. People might say, how is that possible given that we’re talking more about the environment and there are more companies like yours? The truth is, the magnitude of the problem grossly outstrips the amount of solutions. We have so, so, so many devices. And when [e-waste isn’t disposed of correctly], it can get put into a landfill, thrown into a river or a lake, or just buried. Sadly, it could also be sent to a country where they don't have the right tools or expertise to dismantle old electronics. Eventually the linings [of devices] break, and when they’re rained upon, the very toxic materials [they contain]—mercury, lead, arsenic, beryllium, cadmium—come out. If they get back into the land and water, it has very negative effects on the health of our vegetation, our animals, and our people. So unfortunately, no, I’m not surprised [by the Lancet study]. You founded ERI because of the environmental concern, but you and your team quickly came to realize the cybersecurity risk as well: Many of these tossed-out devices contain sensitive personal or professional data. Shegerian: Yes, we saw these little bread crumbs about data and privacy throughout the 2000s: the birth of Palantir, the founding of LifeLock, what we were seeing ourselves at ERI. Really, in 2012 I started speaking to companies about the need to “shred” data the way they shred sensitive papers. They looked at us like we were green Martians.
Over the years, I spoke about it at conferences anyway, and at one of these in 2017, Robert Hackett from Fortune asked for an interview and wrote an article that ended with this line: “Turns out e-waste isn’t just an environmental menace, but a cybersecurity one too.” Five years of banging the drum, and thanks to this article, we were finally off to the races…comparatively. Comparatively. Because you find that people, both as individuals and on the enterprise level, aren’t taking the data risk seriously enough. How did that inspire The Insecurity of Everything? Shegerian: Technology is so ubiquitous that this is a societal problem we all have to reckon with. It’s much more serious than just affecting your family or your company. This is a problem of international magnitude, one that has homeland-security risks around it. That’s why we wrote the book: The vast majority of our clients still were not listening. They just wanted us for environmental work but they weren’t really sold on the hardware data-destruction part of the work yet. We wanted to write this book to share some examples of serious consequences—to show that this isn’t some remote, theoretical concern. Can you share some of those anecdotes? Shegerian: I once had a big, big bank call me up: “John, we’ve had a breach, but we don’t believe it’s phishing or software. We think it came from hardware.” I go out there and it turns out one of their bankers threw his laptop in the trash in Manhattan and someone fished it out. On that laptop was information from the many clients of the entire banking firm—and the bank’s multibillion-dollar enterprise. The liability, the data…God, just absolutely priceless. If it got into the wrong people’s hands, the ransom that could have been extracted was truly of huge magnitude.
You also have situations like the federal government—I won’t say what branches—telling us: “We have all of these old electronics that are potentially data-heavy, and when companies like yours gave us quotes [for responsible recycling], it seemed kind of expensive. We were told to save money and we found someone to do it for free.” Free? Yeah, no. What happens is that a guy will pick up the devices for free, put them in a container, and sell them wholesale to the highest bidder. Lots of those buyers are harvesting the precious metals and materials out of old electronics—but there are also people adverse to homeland security who want to pull out the hard drives and find a way to harm us here in the U.S. or hold corporate data for ransom. From those examples you can see why you need to protect your financial and personal data on an individual level too. What do people need to know—and do—to avoid becoming one of these stories? Shegerian: It is crucial to make sure that if you’re giving [your device] to a retailer who has a take-back or trade-in program, you vet them and make sure they’re using responsible recyclers. Make sure they guarantee you that all your data will be destroyed before they take your phone and resell it. If they won’t tell you, with radical transparency, which vendor is handling the materials or where they’re going to go? Pass. Hard drives are wiped at ERI’s facilities. Photo: ERI For the engineers of today and tomorrow who are interested in this work, how can they be part of the solution? Shegerian: Engineers have been such important partners for us, whether it’s creating e-waste shredding machines or things like glass-cleaning technology that helps us recycle materials. They’ve also helped us be the first to develop AI and robotics in our facility. So they could come work for someone like us and answer questions like, How do we recycle more of this material in a faster and better way, with less impact to the environment?
On the other side, engineers are still going to be hired by great OEMs, whether tech or auto companies, and that’s beautiful because now they could design and engineer for circular economy behavior. They could create new products made of recycled copper, gold, silver, steel, plastics—keeping them out of our landfills. Engineers have a huge opportunity to help leave the world a better, safer, and cleaner place than we inherited. But everyone on Earth is a stakeholder in this. We all have to be part of the solution.
  • The Seabed Solution
    Jan 06, 2022 09:00 AM PST
    The three-year voyage of the HMS Challenger was one of the greatest scientific expeditions in an era with quite a few of them. The former warship departed England in 1872 with a complement of 237 on a mission to collect marine specimens and also to map and sample huge swaths of the seafloor. The ship traveled 125,936 kilometers, and the mission succeeded beyond the wildest dreams of its backers. It discovered 4,700 new marine species, the Mid-Atlantic Ridge, and the Mariana Trench. Its bathymetric data, collected laboriously with a weighted line, was used to make the seafloor maps that guided the route of an early transatlantic telegraph cable. But the crew’s most puzzling discovery was made on 18 February 1873, while dredging an abyssal plain near the Canary Islands. The dredging apparatus came up loaded with potato-size nodules; subsequent analysis found them to be rich in manganese, nickel, and iron. It was the first of many such hauls by the Challenger crew, from the Indian Ocean to the Pacific, where the dredges sometimes yielded a briny jumble of the dark-gray nodules, shark’s teeth, and, oddly, whale ear bones. Quite soon, we’re all going to find out whether existing technology can be used to harvest those nodules and recover their valuable metals at costs competitive with more traditional mining techniques. And the timing is hardly coincidental. Over the next decade, a great shift to electric vehicles is expected to drive up demand for cobalt, nickel, copper, and manganese—all key metals in lithium-ion batteries, and all present in minable quantities in seafloor nodules. Later this year, as David Schneider notes in “Deep-sea Mining Stirs Up Muddy Questions,” a Canadian firm called the Metals Company (formerly DeepGreen Metals) plans to begin testing a nodule-collecting system comprising a seafloor robotic collector vehicle connected to a mammoth surface support ship. It has been a long and twisty road from the initial discoveries by the Challenger. 
Nearly 90 years would go by before somebody would propose collecting the nodules on a mass scale. In the December 1960 issue of Scientific American, the mining engineer John L. Mero argued his case and triggered a substantial spending spree as oceanographic research institutes sought, successfully, to verify his claims. Still, it would be another half century before a startup, Nautilus Minerals, would try to make a go of large-scale deep-seabed mining. Nautilus’s idea wasn’t to collect nodules, though, but rather to cut and drill into crusty deposits near deep-sea thermal vents, where valuable metals and minerals have been deposited over many millennia. But after raising some US $686 million, building three large undersea drilling robots, and securing a license to mine the seabed off Papua New Guinea, Nautilus went bankrupt in November 2019. When it ceased operations, it hadn’t mined any metal ore at all. The Metals Company, too, faces headwinds. So far, the firm, which has raised some $265 million in funding, has negotiated exploration rights to three different regions in the Pacific totaling some 74,700 square kilometers of seabed. It’s converting a 228-meter former drill ship into a mining-support surface ship, and it’s also building the robotic vehicle that will suck up nodules off the seafloor at depths exceeding 4,000 meters. The company has competition: Belgium-based Global Sea Mineral Resources is also testing a robotic undersea-nodule collector and has plans to mine the same region of the vast Pacific abyssal plains, called the Clarion-Clipperton Zone, as the Metals Company. Conservationists are mobilizing against the plans. The Atlantic, The Guardian, and Nature have all published articles citing delicate marine ecosystems that could be threatened by the mining.
At the same time, the International Energy Agency projects that 145 million electric vehicles will be on the road by 2030. Each one of them will have a battery containing quantities of cobalt, manganese, and nickel ranging from several kilograms to a couple of dozen kilograms. The Metals Company claims that the metals content of the nodules in just its area of exploration in the Clarion-Clipperton Zone could supply 250 million EVs. Analysts believe that conventional surface mines could supply that much metal, but digging it out of the ground would not be pretty. The mining of cobalt, lithium, manganese, and nickel has long been associated with environmental and human-rights disasters. Humanity has begun insisting on greater sustainability in countless industries. But in mining, at least, it may find the apt phrase is not so much “better angels” as “lesser evil.”
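The demand figures in this piece imply a simple range calculation. In the sketch below, the per-vehicle bounds are assumptions standing in for "several kilograms to a couple of dozen kilograms"; only the 145-million-EV projection comes from the article:

```python
# Rough total battery-metal demand implied by the IEA projection above.
evs_on_road_2030 = 145_000_000           # IEA projection cited in the text
kg_per_ev_low, kg_per_ev_high = 5, 24    # assumed per-vehicle bounds

tonnes_low = evs_on_road_2030 * kg_per_ev_low / 1000     # 725,000 tonnes
tonnes_high = evs_on_road_2030 * kg_per_ev_high / 1000   # 3,480,000 tonnes
```

Even at the low end, roughly 0.7 million tonnes of combined cobalt, manganese, and nickel would be needed, which suggests why miners are eyeing the seafloor.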
  • IEEE WIE Conference Will Explore the Future of Work
    Jan 05, 2022 11:00 AM PST
    The 2022 IEEE Women in Engineering International Leadership Conference is scheduled for 6 and 7 June in a hybrid format, with both in-person and virtual networking events. The in-person events are to take place at the San Diego Convention Center. The annual WIE ILC aims to support and sustain female leaders and technologists, especially mid- to senior-career workers. This year’s theme is Transforming Leadership. The hybrid format lets attendees and speakers decide how to attend based on their own risk assessment. Virtual attendees will be able to access livestreams and remote events online. All in-person sessions will be recorded and made available to virtual attendees at a later date. The past two conferences have been held virtually due to the COVID-19 pandemic, and they both were successful, reaching a large international audience. Proposals are being sought for this year’s keynote speakers and events, such as breakout sessions, panels, and workshops. The deadline is 1 February. A SESSION FOR EVERYONE In the past, the most popular WIE ILC sessions have been skill-building workshops for career management, breakout talks on new technologies, and executive leadership training sessions. They will be back this year as well. Leadership and career topics to be covered at this year’s conference include career management; the future of remote/hybrid work; and increasing inclusion, intersectionality, and representation. Artificial intelligence, machine learning, 5G, the Internet of Things, and the interface between business and technology will be explored in sessions and panels. This year the conference is introducing Birds of a Feather sessions, which are designed to provide a safe space for like-minded attendees to network. Session attendees can discuss topics that are affecting them, such as caregiving, technology integration, and balancing being an underrepresented minority with working as an engineer. 
Attendees who are interested in leading a conversation on a topic of their choice can submit their suggestion on the IEEE WIE ILC website. IEEE members can plan and host their own virtual global networking event during the conference. It can focus on specific IEEE regions or sections and can be conducted in various languages. Contact the conference committee staff if you’re interested in holding a networking event. FLEXIBLE PLATFORM WIE ILC’s virtual platform is flexible and interactive for attendees, speakers, and sponsors. It uses artificial intelligence to connect attendees with other participants who have similar interests. The platform also can identify and share program content that attendees might find interesting based on sessions they have attended. The AI engine can connect sponsors and attendees through its “matchmaking” capabilities, which consider what sessions participants are attending and their profile information. Attendees who have expressed interest in a specific technology or career in a particular engineering field will be matched with sponsors in that area so the two can talk in more detail. Interested sponsors should contact the conference committee staff. SPEAKERS NEEDED Past speakers have had diverse backgrounds, and the plan is to continue that trend this year. The WIE ILC is seeking experts who can talk about subjects such as the future of work, effectively leading dispersed teams, adapting to hybrid workplaces, and career transitions. The pandemic is causing many people around the world to change jobs and even careers, and the conference is looking for speakers who can help attendees through such transitions. Speaker proposals may be submitted on the IEEE WIE ILC website. Talks from last year’s conference are still available on IEEE.TV. They include keynotes from Qualcomm’s Susie Armstrong, Intel’s Sandra Rivera, and McAfee’s Lynne Doherty. 
One speaker from last year’s conference who resonated with many was Julie Coker from the San Diego Tourism Authority. She told the audience to “find their seat at the corporate table.” “If there is no table, make one,” Coker said. “If there is no chair, bring your own.” We hope to see you virtually or in San Diego.
  • Zapping the Brain and Nerves Could Treat Long COVID
    Jan 05, 2022 06:00 AM PST
    New Year’s Day marked a dispiriting milestone for one New Jersey woman: 20 months of symptoms of COVID-19. The woman, who asked for anonymity to protect her medical privacy, suffers from a variety of neurological problems that are associated with long COVID, including brain fog, memory problems, difficulty reading, and extreme fatigue. In her search for treatment, she came across neurologists at New York University (NYU) who were trying electrical neurostimulation for long COVID patients. She signed up for experimental treatments five days per week that send gentle electric currents through her skull and into her cortex. It might sound weird, she says, but the reality is quite mundane. “People ask me, ‘You’re putting electricity in your brain? Where do you go to do that?’ And I say, ‘I do it in my house, I just put on a headband and make a call.’ ” The woman was part of a wave of people who started turning up at NYU’s neurology clinic in the late spring of 2020, several months after the first wave of COVID-19 cases hit New York City. “They were saying, ‘I can’t function, I can’t return to work,’ ” remembers Leigh Charvet, a professor of neurology at NYU Grossman School of Medicine. To make matters worse, doctors had little to offer these patients. Even as the world continues to grapple with new waves of acute illness, doctors are trying to understand and find treatments for long COVID, which can trouble patients for many months after their recovery from the initial infection. The syndrome, technically known as post-acute sequelae of SARS-CoV-2 infection (PASC), is associated with a long list of possible symptoms, including heart palpitations, breathing problems, and a wide variety of neurological issues. “We need to do so much work to understand what long COVID is,” Charvet says. 
“But we also need to reach people now with something that we know is safe and deployable.”
Researchers Step Up
Neurostimulation refers to electrical stimulation of the brain or peripheral nerves with either implanted or external devices; it's part of a growing field that's sometimes called bioelectronic medicine or electroceuticals. When the pandemic hit, researchers who had been working on neurostimulation for other maladies looked for ways to help the medical response. “This was a chance for neuromodulation to step up,” says Marom Bikson, codirector of neural engineering at the City College of New York and cofounder of the neurotech company Soterix Medical, which supplied stimulation gear to several research groups. Some researchers began investigating whether neurostimulation could help with the acute phase of infection. In Brazil, Suellen Andrade of the Federal University of Paraiba recently concluded a study using transcranial direct current stimulation (tDCS) to help patients in the intensive care unit. While her team is still preparing a publication on the results, she says that patients who received the stimulation (instead of a sham treatment) required significantly less time on ventilators and were discharged sooner. Others, including Charvet, took on long COVID. The U.S. Food and Drug Administration (FDA) was seeking remote and scalable treatment options for COVID-19 patients, and actively solicited proposals for neurostimulation trials that could be carried out by patients in their own homes. While the trials so far have been very small, the results have been promising enough to support larger studies to optimize the technology and to test the efficacy of these treatments. Charvet has tried tDCS with a handful of people so far. A patient puts on an electrode-studded headband that’s attached to a controller and calls the study coordinator, who provides a unique code to enable that day’s stimulation.
During the 20-minute stimulation session, the patient also does a therapeutic activity such as a cognitive game, and may also do some physical exercise after the session. Charvet says the research so far has been “a testing ground—it’s not scientific, it’s not controlled.” Patients have come to her for help with brain fog, fatigue, headaches, emotional dysregulation, and other problems, and she tweaks the treatment protocols based on each person’s symptoms. She’s now planning a larger trial with NYU patients that’s intended to optimize the technology for at-home treatments. The trial will debut a tDCS headband that also tracks heart-rate variability; she and her colleagues hope that this biomarker will serve as an indicator of the patient’s response to treatment. They’ll use a headset made by Soterix Medical that measures the impedance in the electrodes and translates that signal into heart-rate data. “What drives us is that there’s a tremendous unmet need,” Charvet says. “And our patients are getting better.” At the Medical University of South Carolina, psychiatry professor Mark George tried a different neurostimulation approach in a pilot study of 20 patients that he began in late 2020; his study used an at-home device that stimulated the vagus nerve through the ear. George’s team assembled a “real tough briefcase with a whole bunch of good stuff inside,” he says, likening the equipment to Mission Impossible supplies. Each patient got an iPad for telemedicine consultations and for symptom surveys, the stimulation device, and “a portable ICU” with wearables that measured heart rate, oxygen saturation, and blood pressure. George’s patients did 1-hour sessions each morning and evening, six days per week, while seated and doing whatever they wished. “We showed you could do this kind of stimulation at home; the safety data was impeccable,” George says.
“And we saw reductions in brain fog, improvements in energy, some improvement in anxiety.” He’s now applying for funding for a larger study. One of his patients, a woman in her 60s who asked to be identified only by her first name, Pam, says she suffered from brain fog, memory lapses, fatigue, and mood swings following her case of COVID-19, which sent her to the emergency room in April 2020. When she started the stimulation, she felt a lessening of the uncharacteristic depression and anger that had troubled her, she says. “When I started with the treatment, I felt a little brighter, more like myself,” Pam says. “I think I was a little better mentally.” Another participant, a woman in her 50s who asked to be identified only as Beth, spent 23 days in the hospital during her initial battle with COVID-19, including more than a week in the intensive care unit. A few weeks after she started the stimulation, “I noticed improvements in my headaches,” Beth says, “and also with the vertigo.” Both women say their symptoms returned when the study ended, although not with the same intensity.
Disentangling Everything
One of the challenges that researchers face as they investigate the utility of neurostimulation for long COVID is the diversity of symptoms that patients report. George says his study deliberately took a “shotgun approach,” enrolling patients with a variety of neurological symptoms and looking at who responded best to the treatment. More work is needed to clarify which stimulation methods are most effective for which subsets of patients. What’s more, there are a host of confounding factors at play, notes Jennifer Frontera, a professor of neurology at NYU Grossman School of Medicine. “It’s a very heterogeneous group of people describing very heterogeneous symptoms,” she says. NYU initially became a hub for research on long COVID because the hospital saw so many patients in the first wave of the pandemic and has tracked released patients over time.
In September 2021, the National Institutes of Health put the institution in charge of a US $470 million grant to support large-scale studies of long COVID. Frontera notes that a big part of that project, called the Researching COVID to Enhance Recovery (RECOVER) Initiative, will be disentangling everything. Frontera explains that some people dealing with long COVID may have experienced low levels of oxygen in their brains during their acute illness, while others may have immune systems that went into overdrive following COVID-19 infection. But others may be experiencing a worsening of underlying conditions such as mood disorders and dementia, and still others may be having symptoms that aren’t actually related to their COVID-19 infections. “Many people are sitting around their houses, they’re not out walking around,” she says. “Some people are more in tune with their bodies and are noticing things they never noticed before.” Even the weight gain that’s been so common during the pandemic can confuse matters since it can lead to sleep apnea, which in turn can cause sleep problems, fatigue, and headaches. To get a handle on the basics, Frontera and her colleagues conducted a study about health impacts of the pandemic, surveying 1,000 people whose demographics roughly matched those of the United States in terms of age, gender, and ethnicity. They didn’t ask participants if they’d been infected with COVID-19 until the end. They found that pandemic-related stress factors such as financial and relationship problems were as predictive of anxiety, depression, and insomnia as a history of COVID-19 infection. However, a history of infection was more predictive of cognitive issues. Frontera doesn’t see the study’s result as undercutting the severity of the long COVID problem. She notes that the study found that 25 percent of people with a history of COVID-19 had symptoms that persisted beyond a month.
“If you translate that out to the population of the United States, that would be 6 million people,” she says. She’s most troubled by the cognitive problems she’s seeing, she says: “We don’t have a medication for brain fog.” Frontera and her colleagues have also been following people who have been hospitalized because of COVID-19; they published a paper regarding the patients’ status six months after infection and recently submitted a paper with data from one year after infection. Even after one year, she says, 80 percent of those people were still experiencing symptoms, and 50 percent scored as abnormal on a cognitive screening tool. “That’s a lot of cognitive disability,” she says.
Searching for the “Why”
If neurostimulation does help with the neurological symptoms of long COVID, it’s not clear why. Stimulation with tDCS has been shown to increase “plasticity” in the brain, or the ability of the brain to make new connections between neurons; neuroplasticity is associated with learning, changing thought patterns, and rehabilitation after injury. Vagus-nerve stimulation has been shown to reduce inflammation in the body, which is a component of autoimmune disorders; if some long haulers are suffering from an overactive immune system, vagus-nerve stimulation could help. George in South Carolina hopes to collect biomarkers associated with inflammation in his next study to examine that possible connection. The researchers are hoping that larger studies will begin to shed light on the ways that neurostimulation impacts the neurology of people with long COVID. And if millions of people in the United States alone are in need of treatment, they may have an unprecedented opportunity for research. Marom Bikson of Soterix Medical notes that both the research field and the industry of neurostimulation are just getting started.
“We don’t have Pfizers of neuromodulation,” he says, “but you can only imagine what would happen if it shows an effect on long COVID.” It could lead to millions of people having stimulators in their homes, he suggests, which could open other doors. “Once you start stimulating for long COVID, you can start stimulating for other things like depression,” he says. But he says it’s crucial to proceed cautiously and not make unsupported claims for neurostimulation’s powers. “Otherwise,” he says, “it could have the opposite effect.”
  • AI’s 6 Worst-Case Scenarios
    Jan 03, 2022 12:00 PM PST
Hollywood’s worst-case scenario involving artificial intelligence (AI) is familiar as a blockbuster sci-fi film: Machines acquire humanlike intelligence, achieve sentience, and inevitably turn into evil overlords that attempt to destroy the human race. This narrative capitalizes on our innate fear of technology, a reflection of the profound change that often accompanies new technological developments. However, as Malcolm Murdock, machine-learning engineer and author of the 2019 novel The Quantum Price, puts it, “AI doesn’t have to be sentient to kill us all. There are plenty of other scenarios that will wipe us out before sentient AI becomes a problem.” In interviews with AI experts, IEEE Spectrum has uncovered six real-world AI worst-case scenarios that are far more mundane than those depicted in the movies. But they’re no less dystopian. And most don’t require a malevolent dictator to bring them to fruition. Rather, they could simply happen by default, unfolding organically—that is, if nothing is done to stop them. To prevent these worst-case scenarios, we must abandon our pop-culture notions of AI and get serious about its unintended consequences.
1. When Fiction Defines Our Reality…
Unnecessary tragedy may strike if we allow fiction to define our reality. But what choice is there when we can’t tell the difference between what is real and what is false in the digital world? In a terrifying scenario, the rise of deepfakes—fake images, video, audio, and text generated with advanced machine-learning tools—may someday lead national-security decision-makers to take real-world action based on false information, leading to a major crisis, or worse yet, a war.
Andrew Lohn, senior fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), says that “AI-enabled systems are now capable of generating disinformation at [large scales].” By producing greater volumes and variety of fake messages, these systems can obfuscate their true nature and optimize for success, improving their desired impact over time. The mere notion of deepfakes amid a crisis might also cause leaders to hesitate to act if the validity of information cannot be confirmed in a timely manner. Marina Favaro, research fellow at the Institute for Peace Research and Security Policy in Hamburg, Germany, notes that “deepfakes compromise our trust in information streams by default.” Both action and inaction caused by deepfakes have the potential to produce disastrous consequences for the world.
2. A Dangerous Race to the Bottom
When it comes to AI and national security, speed is both the point and the problem. Since AI-enabled systems confer greater speed benefits on their users, the first countries to develop military applications will gain a strategic advantage. But what design principles might be sacrificed in the process? The tiniest flaws in such systems could be exploited by hackers and cause things to unravel. Helen Toner, director of strategy at CSET, suggests a crisis could “start off as an innocuous single point of failure that makes all communications go dark, causing people to panic and economic activity to come to a standstill. A persistent lack of information, followed by other miscalculations, might lead a situation to spiral out of control.” Vincent Boulanin, senior researcher at the Stockholm International Peace Research Institute (SIPRI), in Sweden, warns that major catastrophes can occur “when major powers cut corners in order to win the advantage of getting there first.
If one country prioritizes speed over safety, testing, or human oversight, it will be a dangerous race to the bottom.” For example, national-security leaders may be tempted to delegate decisions of command and control, removing human oversight of machine-learning models that we don’t fully understand, in order to gain a speed advantage. In such a scenario, even an automated launch of missile-defense systems initiated without human authorization could produce unintended escalation and lead to nuclear war.
3. The End of Privacy and Free Will
With every digital action, we produce new data—emails, texts, downloads, purchases, posts, selfies, and GPS locations. By allowing companies and governments to have unrestricted access to this data, we are handing over the tools of surveillance and control. With the addition of facial recognition, biometrics, genomic data, and AI-enabled predictive analysis, Lohn of CSET worries that “we are entering dangerous and uncharted territory with the rise of surveillance and tracking through data, and we have almost no understanding of the potential implications.” Michael C. Horowitz, director of Perry World House, at the University of Pennsylvania, warns “about the logic of AI and what it means for domestic repression. In the past, the ability of autocrats to repress their populations relied upon a large group of soldiers, some of whom may side with society and carry out a coup d’état. AI could reduce these kinds of constraints.” The power of data, once collected and analyzed, extends far beyond the functions of monitoring and surveillance to allow for predictive control. Today, AI-enabled systems predict what products we’ll purchase, what entertainment we’ll watch, and what links we’ll click. When these platforms know us far better than we know ourselves, we may not notice the slow creep that robs us of our free will and subjects us to the control of external forces.
4. A Human Skinner Box
The ability of children to delay immediate gratification, to wait for the second marshmallow, was once considered a major predictor of success in life. Soon even the second-marshmallow kids will succumb to the tantalizing conditioning of engagement-based algorithms. Social media users have become rats in lab experiments, living in human Skinner boxes, glued to the screens of their smartphones, compelled to sacrifice more precious time and attention to platforms that profit from it at their expense. Helen Toner of CSET says that “algorithms are optimized to keep users on the platform as long as possible.” By offering rewards in the form of likes, comments, and follows, Malcolm Murdock explains, “the algorithms short-circuit the way our brain works, making our next bit of engagement irresistible.” To maximize advertising profit, companies steal our attention away from our jobs, families and friends, responsibilities, and even our hobbies. To make matters worse, the content often makes us feel miserable and worse off than before. Toner warns that “the more time we spend on these platforms, the less time we spend in the pursuit of positive, productive, and fulfilling lives.”
5. The Tyranny of AI Design
Every day, we turn over more of our daily lives to AI-enabled machines. This is problematic since, as Horowitz observes, “we have yet to fully wrap our heads around the problem of bias in AI. Even with the best intentions, the design of AI-enabled systems, both the training data and the mathematical models, reflects the narrow experiences and interests of the biased people who program them.
And we all have our biases.” As a result, Lydia Kostopoulos, senior vice president of emerging tech insights at the Clearwater, Fla.–based IT security company KnowBe4, argues that “many AI-enabled systems fail to take into account the diverse experiences and characteristics of different people.” Since AI solves problems based on biased perspectives and data rather than the unique needs of every individual, such systems produce a level of conformity that doesn’t exist in human society. Even before the rise of AI, the design of common objects in our daily lives has often catered to a particular type of person. For example, studies have shown that cars, hand-held tools including cellphones, and even the temperature settings in office environments have been established to suit the average-size man, putting people of varying sizes and body types, including women, at a major disadvantage and sometimes at greater risk to their lives. When individuals who fall outside of the biased norm are neglected, marginalized, and excluded, AI turns into a Kafkaesque gatekeeper, denying access to customer service, jobs, health care, and much more. AI design decisions can restrain people rather than liberate them from day-to-day concerns. And these choices can also transform some of the worst human prejudices into racist and sexist hiring and mortgage practices, as well as deeply flawed and biased sentencing outcomes.
6. Fear of AI Robs Humanity of Its Benefits
Since today’s AI runs on data sets, advanced statistical models, and predictive algorithms, the process of building machine intelligence ultimately centers around mathematics. In that spirit, said Murdock, “linear algebra can do insanely powerful things if we’re not careful.” But what if people become so afraid of AI that governments regulate it in ways that rob humanity of AI’s many benefits?
For example, DeepMind’s AlphaFold program achieved a major breakthrough in predicting how amino acids fold into proteins, making it possible for scientists to identify the structure of 98.5 percent of human proteins. This milestone will provide a fruitful foundation for the rapid advancement of the life sciences. Consider the benefits of improved communication and cross-cultural understanding made possible by seamlessly translating across any combination of human languages, or the use of AI-enabled systems to identify new treatments and cures for disease. Knee-jerk regulatory actions by governments to protect against AI’s worst-case scenarios could also backfire and produce their own unintended negative consequences, leaving us so scared of the power of this tremendous technology that we resist harnessing it for the actual good it can do in the world. This article appears in the January 2022 print issue as "AI’s Real Worst-Case Scenarios."
  • Top Tech 2022: A Special Report
    Jan 03, 2022 09:45 AM PST
    At the start of each year, IEEE Spectrum attempts to predict the future. It can be tricky, but we do our best, filling the January issue with a couple of dozen reports, short and long, about developments the editors expect to make news in the coming year. This isn’t hard to do when the project has been in the works for a long time and is progressing on schedule—the coming first flight of NASA’s Space Launch System, for example. For other stories, we must go farther out on a limb. A case in point: the description of a hardware wallet for Bitcoin that the company formerly known as Square (which recently changed its name to Block) is developing but won’t officially comment on. One thing we can predict with confidence, though, is that Spectrum readers, familiar with the vicissitudes of technical development work, will understand if some of these projects don’t, in fact, pan out. That’s still okay. Engineering, like life, is as much about the journey as the destination. See all stories from our Top Tech 2022 Special Report ➞
  • Quantum Dots + OLED = Your Next TV
    Jan 03, 2022 08:00 AM PST
    For more than a decade now, OLED (organic light-emitting diode) displays have set the bar for screen quality, albeit at a price. That’s because they produce deep blacks, offer wide viewing angles, and have a broad color range. Meanwhile, QD (quantum dot) technologies have done a lot to improve the color purity and brightness of the more wallet-friendly LCD TVs. In 2022, these two rival technologies will merge. The name of the resulting hybrid is still evolving, but QD-OLED seems to make sense, so I’ll use it here, although Samsung has begun to call its version of the technology QD Display. To understand why this combination is so appealing, you have to know the basic principles behind each of these approaches to displaying a moving image. In an LCD TV, the LED backlight, or at least a big section of it, is on all at once. The picture is created by filtering this light at the many individual pixels. Unfortunately, that filtering process isn’t perfect, and in areas that should appear black some light gets through. In OLED displays, the red, green, and blue diodes that comprise each pixel emit light and are turned on only when they are needed. So black pixels appear truly black, while bright pixels can be run at full power, allowing unsurpassed levels of contrast. But there’s a drawback. The colored diodes in an OLED TV degrade over time, causing what’s called “burn-in.” And with these changes happening at different rates for the red, green, and blue diodes, the degradation affects the overall ability of a display to reproduce colors accurately as it ages and also causes “ghost” images to appear where static content is frequently displayed. Adding QDs into the mix shifts this equation. Quantum dots—nanoparticles of semiconductor material—absorb photons and then use that energy to emit light of a different wavelength. In a QD-OLED display, all the diodes emit blue light. To get red and green, the appropriate diodes are covered with red or green QDs. 
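The downconversion logic described above can be sanity-checked with a little photon arithmetic. The sketch below (illustrative peak wavelengths, not actual panel specifications) computes photon energies to show why blue diodes can pump red and green quantum dots, but never the reverse:

```python
# Photon energy E = h*c / wavelength; 1239.84 eV*nm is a standard value for h*c.
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a photon (in eV) for a given wavelength in nanometers."""
    return HC_EV_NM / wavelength_nm

# Representative peak wavelengths for display primaries (assumed values):
e_blue = photon_energy_ev(450)   # ~2.76 eV
e_green = photon_energy_ev(530)  # ~2.34 eV
e_red = photon_energy_ev(630)    # ~1.97 eV

# A quantum dot re-emits at a lower photon energy than it absorbs (the Stokes
# shift), so a blue pump can yield green or red light, but a red pump could
# never be converted up to blue.
assert e_blue > e_green > e_red
```

This is why the all-blue OLED layer is the natural choice: blue photons carry enough energy to be converted down to either of the other two primaries.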
The result is a paper-thin display with a broad range of colors that remain accurate over time. These screens also have excellent black levels, wide viewing angles, and improved power efficiency over both OLED and LCD displays. Samsung is the driving force behind the technology, having sunk billions into retrofitting an LCD fab in Tangjeong, South Korea, for making QD-OLED displays. While other companies have published articles and demonstrated similar approaches, only Samsung has committed to manufacturing these displays, which makes sense because it holds all of the required technology in house. Having both the OLED fab and QD expertise under one roof gives Samsung a big leg up on other QD-display manufacturers. Samsung first announced QD-OLED plans in 2019, then pushed out the release date a few times. It now seems likely that we will see public demos in early 2022 followed by commercial products later in the year, once the company has geared up for high-volume production. At this point, Samsung can produce a maximum of 30,000 QD-OLED panels a month; these will be used in its own products. In the grand scheme of things, that’s not that much. Unfortunately, as with any new display technology, there are challenges associated with development and commercialization. For one, patterning the quantum-dot layers and protecting them is complicated. Unlike QD-enabled LCD displays (commonly referred to as QLED), where red and green QDs are dispersed uniformly in a polymer film, QD-OLED requires the QD layers to be patterned and aligned with the OLEDs behind them. And that’s tricky to do. Samsung is expected to employ inkjet printing, an approach that reduces the waste of QD material. Another issue is the leakage of blue light through the red and green QD layers. Leakage of only a few percent would have a significant effect on the viewing experience, resulting in washed-out colors.
If the red and green QD layers don’t do a good job absorbing all of the blue light impinging on them, an additional blue-blocking layer would be required on top, adding to the cost and complexity. Another challenge is that blue OLEDs degrade faster than red or green ones do. With all three colors relying on blue OLEDs in a QD-OLED design, this degradation isn’t expected to cause color shifts as severe as those in traditional OLED displays, but it does decrease brightness over the life of the display. Today, OLED TVs are typically the most expensive option on retail shelves. And while the process for making QD-OLED simplifies the OLED layer somewhat (because you need only blue diodes), it does not make the display any less expensive. In fact, due to the large number of quantum dots used, the patterning steps, and the special filtering required, QD-OLED displays are likely to be more expensive than traditional OLED ones—and way more expensive than LCD TVs with quantum-dot color purification. Early adopters may pay about US $5,000 for the first QD-OLED displays when they begin selling later this year. Those buyers will no doubt complain about the prices—while enjoying a viewing experience far better than anything they’ve had before. Update 5 January 2022: At CES 2022, the annual consumer electronics show held in Las Vegas, three companies announced products incorporating QD-OLED technology, all using Samsung’s display hardware. Samsung unveiled a 65-inch QD-Display TV. Alienware introduced a gaming monitor. And Sony launched two Bravia XR A95K TVs. None of these companies have yet announced pricing.
  • Exoskeletons, Smart Rings, and Flying Cars Will Be at CES 2022
    Jan 03, 2022 12:00 AM PST
    Like many consumer technology professionals and members of the press, I had been planning to attend CES, the gigantic annual consumer electronics show, in person this year. That is, until about a week ago, when the current COVID-19 surge made travel, dining out, and spending time indoors surrounded by large numbers of strangers seem like bad ideas. Of course, exhibitors, including new and smaller companies, had been sending out press releases, hoping to entice media to tear themselves away from the attention-grabbing gigantic TV screens and media walls erected by the deep-pocketed consumer electronics behemoths. A number of gadgets touted in these emails did seem worth seeing—and in many cases trying—in person. Alas, that won’t be happening this week. So, from my inbox view of CES, here are my top three product categories that fall outside the consumer electronics mainstream. In each case, I received advance word from two companies with similar products, more than likely indicating the tip of a trend rather than being mere coincidence. Flying Cars All signs indicate that it will be 2023 or 2024 before any of the wannabe flying-car companies get real products off the ground, but two companies will be teasing CES attendees with flying-car prototypes. Tokyo-based SkyDrive promises a demonstration model (not yet autonomous) of its electric vertical-take-off-and-landing (VTOL) one-seater. The company is aiming to use the flying car as an air taxi at World Expo 2025 in Osaka. MACA, based in southeastern France, plans to give a sneak peek at its hydrogen-powered Carcopter at CES, with the first presentation of fully working models slated for the Paris Air Show in 2023. Alas, MACA did not ship its latest prototype to Las Vegas, but plans to use augmented reality to show off the current design. 
Sensor-laden Rings
As the electronics in wearables get smaller, the devices have migrated from the wrist to the fingers.
Honestly, I’m perfectly happy with the wristband form of health-and-fitness wearables, but because consumer gadgets inevitably get bigger (TVs) and smaller (everything else), my next wearable will instead have to slide onto one of my tiny fingers. Two companies are bringing health-and-fitness rings to CES: Circular and Movano. They aren’t the first to cram biometric sensors into a rechargeable ring—others have tried, and Oura succeeded, in bringing such a product to market. The Oura Ring has been monitoring activity, heart rate, and temperature for several years now; a US $300 third-generation ring, released late last year, added a sensor to track blood oxygen levels. (It also added a $6 monthly subscription fee for new purchasers.) Oura is a successful enough product that it’s not surprising to see competitors attempting to enter the marketplace. Circular, based in Paris, hopes to attract consumers to its smart ring by mapping vital signs, including blood oxygen levels, to daily activities. It has also added the ability to vibrate, allowing push notifications when heart rate or blood oxygen numbers are problematic. The company has yet to release specifics on pricing. Movano, based in Pleasanton, Calif., is aiming its Ring at women (by promising a smaller, sleeker wearable). Initially, the Movano ring will collect heart-rate data, respiration, temperature, blood-oxygen saturation, activity, and calorie intake (I’m guessing that last feature involves the app rather than the ring, but will update if I find out otherwise). Movano aims to add blood glucose and blood-pressure monitoring in future versions. Again, no pricing information has been announced. Not showing at CES but sitting on my desk and charging at the moment is the Circul+ Ring from Prevention.
It tracks heart rate, temperature, oxygen saturation, and blood pressure, and can record an ECG. It’s not as sleek or as decorative (at least not at this point) as its competitors, but it’s promising to monitor more biometrics than the sleeker devices. I plan to put this $300 gadget through its paces over the next few weeks, though algorithms for some of the functions are still being regularly updated. Though it’s a little large for everyday wear, if I had a medical condition (like COVID) that had me concerned, I would make it work.
Exoskeletons
Surgeons, factory workers, and others whose jobs require long periods of standing will welcome assistive exoskeleton devices like this soon-to-be-introduced model from Yokohama, Japan–based Archelis.
At least two companies are hoping to bring nonpowered exoskeletons to market. Why do so at CES? They aren’t electronic devices per se, yet they were obviously inspired by electrically powered exoskeletons, so I’m calling it fair. I would have been excited to try these out in person, particularly toward the end of a long, tiring day on the show floor. Archelis of Yokohama, Japan, is aiming its exoskeletons at surgeons and factory workers who spend long hours standing. The company says the device “supports the body in a standing position,” reducing fatigue and preventing low-back and leg pain, while allowing users to move around naturally. No pricing has been announced. Also from Japan, Innophys of Tokyo is using CES for the North American introduction of its Muscle Suit Every and Muscle Suit GS-Arm. The company offers two versions of its exoskeleton—one to support the body in a half-sit during lifting, the other to support the arms when they need to be raised for an extended period. Both use compressed air, added and released manually, to assist the wearer.
  • Schrödinger’s Tardigrade Claim Incites Pushback
    Dec 31, 2021 07:00 AM PST
    “I don’t like it, and I’m sorry I had anything to do with it,” the physicist Erwin Schrödinger supposedly said of the quantum theory. He was so sorry that he worked to prove it nonsensical with the most famous thought problem in physics, one that involves putting a cat in a box that would fill with poison if a radioactive atom were to split apart spontaneously. According to the theory, that splitting can be said to have happened only if observed; otherwise, it must be deemed indeterminate. And because the cat’s fate is aligned with the atom’s, Schrödinger’s cat must also be considered neither dead nor alive. Patent nonsense, concluded Schrödinger. But later researchers found ways to turn the thought problem into real experiments, and these have actually validated the predictions of quantum theory. One experiment used a resonator chilled nearly to absolute zero so that it became “entangled” across two quantum states, vibrating or not. Those two states were then shown to be superposed. Actually entangling a living creature would be quite a feat for the physicists, perhaps more so for the biochemists. Complex chemical systems don’t normally stand still for inspection, but if you could freeze them quantum-cold you could probe their constituent parts. Some have suggested that biochemical processes, such as photosynthesis, must involve quantum effects; this method could be a way to prove it. To entangle a life-form you have to put it in an extreme vacuum and cool it nearly to absolute zero without killing it. Bacteria have been so entangled. Now a group of scientists say they’ve entangled a tardigrade, commonly called a water bear, a cute critter that’s just barely visible to the naked eye. The 11 researchers published their work on 15 December in the online preprint server arXiv, which is not peer-reviewed. 
Among them are Rainer Dumke of the Center for Quantum Technologies, in Singapore, and Tomasz Paterek of the University of Gdansk, in Poland, who in 2019 were honored, so to speak, with an Ig Nobel Prize for their work on magnetized cockroaches (the results of which bear on methods by which animals navigate). Let the record show that at least one winner of the Ig Nobel, Andre Geim, went on to win an actual Nobel. He got the Ig Nobel for levitating a frog, the real Nobel for discovering graphene. A tardigrade is a good candidate for freezing down to zero in a near-total vacuum. It’s about as tough as an animalcule gets. Insult the thing and it goes dormant by curling up into a ball, called a tun, in a process known as cryptobiosis. Though some have argued that at least some metabolism must still go on, a tun is perhaps best characterized as a life that’s been put on hold. In 2019, when a bunch of tardigrades were deposited on the moon during the very unintended crash-landing of an Israeli spacecraft, many people speculated that the critters would survive even there. Sadly, experiments involving the firing of nylon bullets later suggested that this didn’t happen. Dumke and his colleagues came upon their current interest in the course of studying superconducting qubits, electronic oscillators that many hope will produce a fundamentally new computer based on quantum effects. They wondered what would happen if they put a dormant tardigrade on top of one of their qubits, bringing the system to near absolute zero. First, they learned, the tardigrade survived. That alone is a significant finding. “At this very, very low temperature, almost nothing is moving, everything is in the ground state; it’s a piece of dust,” Dumke tells IEEE Spectrum. “Bring it back to conditions where it can survive, increasing the temperature gently, and the pressure, and it comes back. Some had suggested that in the cryptobiological state, some metabolism is going on. 
Not so.” This discovery raises the question: What forces of natural selection shaped the tardigrade to be so tough? It seems way overengineered for its normal terrestrial habitats, including moss and lichen. Second, Dumke and his colleagues argue, they achieved true quantum entanglement between the qubit and the tardigrade. Larger objects have been so entangled, but those objects were inanimate matter. This is a bigger claim—and one that’s harder to nail down. “We start with a superconducting qubit at energy state 0, comparable to an atom in the ground state; there’s no oscillation—nothing is happening,” Dumke says. “We can use microwaves to supply exactly the right amount of energy for the right amount of time to raise this to level 1; this is like the second orbital in an atom. It is now oscillating. “Or, and this is the important point, we can add exactly that much energy but supply it for just half the time to raise the system to a quantum state of ½, which is the superposition state. In this state, it is at the same time oscillating and not oscillating. You can do extensive testing to measure all three states.” Then the workers tested the system under a number of different conditions to determine the quantum state, and they found that the system consisting of the qubit and the tardigrade together occupied a lower energy state than either one alone would have occupied. The researchers concluded that the two things had been entangled. No need to wait for peer review; in a matter of days, the criticism began to come in. One critic, Ben Brubaker, a physicist turned journalist, has argued on Twitter that the experiments do not demonstrate what the authors claim. 
He said there were three possibilities—that quantum entanglement had been achieved with the entire tardigrade, that it had been achieved with a part of it, and that it hadn’t been achieved at all. That last one would imply that any effects were caused by some classical (nonquantum) physical process. The authors admit that they could not perform the perfect experiment, which would involve measuring the tardigrade and the qubit independently, using two probes. Their tardigrade comes packaged with the qubit, forming a hybrid structure, and so two probes are hard to manage. [Figure: A sketch of the experiment, including a photo of the revived tardigrade on the system’s qubit. Image: arXiv] “So you have to construct a model that represents the qubit as a quantum-mechanical system, and if you do it classically you wouldn’t be able to account for all the features,” says Vlatko Vedral, another author, who is a professor of physics at the University of Oxford. “The feature we are talking about is the quantum energy state that the combined system is able to reach. In fact, much of chemistry is based on this kind of thing—the Van der Waals force.” Kai Sheng Lee, of Singapore’s Nanyang Technological University, says that the criticism of the entanglement claim is at least partially answered in the second part of the arXiv paper, “when we introduce the second qubit.” The presence of two superconducting qubits beside the tardigrade strengthens the case for the existence of entanglement, because here it seems the creature is in superposition with one qubit that’s in the 0 state (sometimes abbreviated |0>) and also with the other qubit, which is in the 1 state (a.k.a. |1>). “But the major weakness,” Vedral concedes, “is that there are no direct measurements on the tardigrade alone. 
This is what you need to do to satisfy even the most conspiratorial critic, the one who says we could explain this with classical arguments.” Can direct measurements of each part in this entanglement triangle ever be made? That question makes Dumke, Vedral, and Lee pause. Finally, Dumke takes a stab at it. “You could try to find a particular resonance frequency inside the tardigrade, then use this frequency to find what leads to a stronger entanglement,” he says. “Or maybe you could genetically engineer the tardigrade to resonate,” Vedral suggests. Why the pregnant pause? Maybe they’re thinking about the question. Maybe they’re thinking about how much of their research plan to reveal. Or maybe the two states are superposed. 1 January 2022 Correction: A previous version of this story misspelled Ben Brubaker’s name. Apologies, Mr. Brubaker!
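Dumke’s description of a full pulse versus a half-duration pulse maps onto a textbook two-level (Rabi) rotation. Below is a minimal NumPy sketch of that idea, using generic matrices rather than anything from the actual experiment: a full pulse takes |0> to |1>, while the same drive applied for half the time leaves the qubit oscillating and not oscillating at once.

```python
import numpy as np

# Toy two-level "qubit" driven on resonance (rotating frame).
# A resonant drive rotates the state about the x axis; driving for
# angle theta applies U = cos(theta/2) I - i sin(theta/2) sigma_x.

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def drive(state, theta):
    """Rotate the qubit state by angle theta about the x axis."""
    U = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sx
    return U @ state

ground = np.array([1, 0], dtype=complex)   # |0>

# Full pulse (theta = pi): |0> goes to |1>, the "level 1" state.
excited = drive(ground, np.pi)
print(np.abs(excited) ** 2)        # probabilities ~[0, 1]

# Half-duration pulse (theta = pi/2): equal superposition of |0> and |1>.
superposed = drive(ground, np.pi / 2)
print(np.abs(superposed) ** 2)     # probabilities ~[0.5, 0.5]
```

The measurement probabilities come from the squared amplitudes, which is why the half-duration pulse yields a 50/50 outcome.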
  • Deep Learning Can’t Be Trusted, Brain Modeling Pioneer Says
    Dec 30, 2021 11:00 AM PST
    During the past 20 years, deep learning has come to dominate artificial intelligence research, powering a series of useful commercial applications. But underneath the dazzle are some deep-rooted problems that threaten the technology’s ascension. The inability of a typical deep learning program to perform well on more than one task, for example, severely limits application of the technology to specific tasks in rigidly controlled environments. More seriously, it has been claimed that deep learning is untrustworthy because it is not explainable—and unsuitable for some applications because it can experience catastrophic forgetting. Said more plainly, if the algorithm does work, it may be impossible to fully understand why. And while the tool is slowly learning a new database, an arbitrary part of its learned memories can suddenly collapse. It might therefore be risky to use deep learning on any life-or-death application, such as a medical one. Now, in a new book, IEEE Fellow Stephen Grossberg argues that an entirely different approach is needed. Conscious Mind, Resonant Brain: How Each Brain Makes a Mind describes an alternative model for both biological and artificial intelligence based on cognitive and neural research Grossberg has been conducting for decades. He calls his model Adaptive Resonance Theory (ART). Grossberg—an endowed professor of cognitive and neural systems, and of mathematics and statistics, psychological and brain sciences, and biomedical engineering at Boston University—based ART on his theories about how the brain processes information. “Our brains learn to recognize and predict objects and events in a changing world that is filled with unexpected events,” he says. Based on that dynamic, ART uses supervised and unsupervised learning methods to solve such problems as pattern recognition and prediction. 
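The catastrophic forgetting described above is easy to reproduce in miniature. The sketch below is a toy logistic-regression model, not any production deep learning system: it learns one task, then keeps training on a conflicting task, and its performance on the first task collapses because the same weights are overwritten.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, flip):
    """Label points by the sign of x0; 'flip' reverses the rule."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(float)
    return X, (1 - y if flip else y)

def train(w, X, y, lr=0.5, epochs=200):
    """Plain full-batch gradient descent on the logistic loss."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == (y > 0.5))

Xa, ya = make_task(500, flip=False)   # task A: sign of x0
Xb, yb = make_task(500, flip=True)    # task B: the opposite rule

w = np.zeros(2)
w = train(w, Xa, ya)
acc_before = accuracy(w, Xa, ya)      # near-perfect on task A

w = train(w, Xb, yb)                  # keep training, now only on B
acc_after = accuracy(w, Xa, ya)       # task A is forgotten
print(acc_before, acc_after)
```

Nothing stored for task A survives, because learning task B repurposes every weight, which is exactly the stability-plasticity dilemma Grossberg says ART was designed to solve.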
Algorithms using the theory have been included in large-scale applications such as classifying sonar and radar signals, detecting sleep apnea, recommending movies, and computer-vision-based driver-assistance software. ART can be used with confidence because it is explainable and does not experience catastrophic forgetting, Grossberg says. He adds that ART solves what he has called the stability-plasticity dilemma: How a brain or other learning system can autonomously learn quickly (plasticity) without experiencing catastrophic forgetting (stability). Grossberg, who formulated ART in 1976, is a pioneer in modelling how brains become intelligent. He is the founder and director of Boston University’s Center for Adaptive Systems and the founding director of the Center of Excellence for Learning in Education, Science, and Technology. Both centers have sought to understand how the brain adapts and learns, and to develop technological applications based on their findings. For Grossberg’s “contributions to understanding brain cognition and behavior, and their emulation by technology,” he received the 2017 IEEE Frank Rosenblatt Award, named for the Cornell professor considered by some to be the “father of deep learning.” Grossberg attempts to explain in his nearly 800-page book how “the small lump of meat that we call a brain” gives rise to thoughts, feelings, hopes, sensations, and plans. In particular, he describes biological neural models that attempt to explain how that happens. The book also covers the underlying causes of conditions such as Alzheimer’s disease, autism, amnesia, and post-traumatic stress disorder. “Understanding how brains give rise to minds is also important for designing smart systems in computer science, engineering and tech, including AI and smart robots,” he writes. 
“Many companies have applied biologically inspired algorithms of the kind that this book summarizes in multiple engineering and technological applications.” The theories in the book, he says, are not only useful for understanding the brain but also can be applied to the design of intelligent systems that are capable of autonomously adapting to a changing world. Taken together, the book describes the fundamental process that enables people to be intelligent, autonomous, and versatile. THE BEAUTY OF ART Grossberg writes that the brain evolved to adapt to new challenges. There is a common set of brain mechanisms that control how humans retain information without forgetting what they have already learned, he says. “We retain stable memories of past experiences, and these sequences of events are stored in our working memories to help predict our future behaviors,” he says. “Humans have the ability to continue to learn throughout their lives, without new learning washing away memories of important information that we learned before.” One of the problems faced by classical AI, he says, is that it often built its models on how the brain might work, using concepts and operations that could be derived from introspection and common sense. “Such an approach assumes that you can introspect internal states of the brain with concepts and words people use to describe objects and actions in their daily lives,” he writes. “It is an appealing approach, but its results were all too often insufficient to build a model of how the biological brain really works.” The problem with today’s AI, he says, is that it tries to imitate the results of brain processing instead of probing the mechanisms that give rise to the results. 
People’s behaviors adapt to new situations and sensations “on the fly,” Grossberg says, thanks to specialized circuits in the brain. People can learn from new situations, he adds, and unexpected events are integrated into their collected knowledge and expectations about the world. ART’s networks are derived from thought experiments on how people and animals interact with their environment, he adds. “ART circuits emerge as computational solutions of multiple environmental constraints to which humans and other terrestrial animals have successfully adapted….” This fact suggests that ART designs may in some form be embodied in all future autonomous adaptive intelligent devices, whether biological or artificial. “The future of technology and AI will depend increasingly on such self-regulating systems,” Grossberg concludes. “It is already happening with efforts such as designing autonomous cars and airplanes. It’s exciting to think about how much more may be achieved when deeper insights about brain designs are incorporated into highly funded industrial research and applications.”
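ART is a concrete algorithm family, and its simplest member, ART1 for binary inputs, is compact enough to sketch. The code below is a bare-bones rendering of the standard fast-learning recipe (the choice function, vigilance test, and prototype intersection), omitting much of the full theory; the parameter values are illustrative.

```python
import numpy as np

class ART1:
    """Minimal fast-learning ART1 sketch for binary vectors.

    rho is the vigilance parameter: higher values demand a closer
    match before an input may refine an existing category, so novel
    inputs create new categories instead of overwriting old ones
    (plasticity without catastrophic forgetting).
    """

    def __init__(self, rho=0.7, alpha=0.001):
        self.rho, self.alpha = rho, alpha
        self.w = []                        # learned binary prototypes

    def present(self, x):
        x = np.asarray(x, dtype=bool)
        # Rank committed categories by the ART choice function.
        scores = [np.sum(x & w) / (self.alpha + np.sum(w)) for w in self.w]
        for j in np.argsort(scores)[::-1]:
            match = np.sum(x & self.w[j]) / np.sum(x)
            if match >= self.rho:          # vigilance test passed
                self.w[j] = x & self.w[j]  # refine the prototype
                return j
        self.w.append(x)                   # no match: commit new category
        return len(self.w) - 1

net = ART1(rho=0.7)
a = net.present([1, 1, 1, 0, 0, 0])
b = net.present([1, 1, 0, 0, 0, 0])   # close to the first: same category
c = net.present([0, 0, 0, 1, 1, 1])   # novel input: gets its own category
```

The third pattern fails the vigilance test against the first category, so it is stored separately rather than blended in, which is the stability half of the stability-plasticity trade-off.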
  • 12 Exciting Engineering Milestones to Look for in 2022
    Dec 30, 2021 08:00 AM PST
    Psyche’s Deep-Space Lasers In August, NASA will launch the Psyche mission, sending a deep-space orbiter to a weird metal asteroid orbiting between Mars and Jupiter. While the probe’s main purpose is to study Psyche’s origins, it will also carry an experiment that could inform the future of deep-space communications. The Deep Space Optical Communications (DSOC) experiment will test whether lasers can transmit signals beyond lunar orbit. Optical signals, such as those used in undersea fiber-optic cables, can carry more data than radio signals can, but their use in space has been hampered by difficulties in aiming the beams accurately over long distances. DSOC will use a 4-watt infrared laser with a wavelength of 1,550 nanometers (the same used in many optical fibers) to send optical signals at multiple distances during Psyche’s outward journey to the asteroid. The Great Electric Plane Race For the first time in almost a century, the U.S.-based National Aeronautic Association (NAA) will host a cross-country aircraft race. Unlike the national air races of the 1920s, however, the Pulitzer Electric Aircraft Race, scheduled for 19 May, will include only electric-propulsion aircraft. Both fixed-wing craft and helicopters are eligible. The competition will be limited to 25 contestants, and each aircraft must have an onboard pilot. The course will start in Omaha and end four days later in Manteo, N.C., near the site of the Wright brothers’ first flight. The NAA has stated that the goal of the cross-country, multiday race is to force competitors to confront logistical problems that still plague electric aircraft, like range, battery charging, reliability, and speed. 6-Gigahertz Wi-Fi Goes Mainstream Wi-Fi is getting a boost with 1,200 megahertz of new spectrum in the 6-gigahertz band, adding a third spectrum band to the more familiar 2.4 GHz and 5 GHz. 
The new band is called Wi-Fi 6E because it extends Wi-Fi’s capabilities into the 6-GHz band. As a rule, higher radio frequencies have higher data capacity, but a shorter range. With its higher frequencies, 6-GHz Wi-Fi is expected to find use in heavy traffic environments like offices and public hotspots. The Wi-Fi Alliance introduced a Wi-Fi 6E certification program in January 2021, and the first trickle of 6E routers appeared by the end of the year. In 2022, expect to see a bonanza of Wi-Fi 6E–enabled smartphones. 3-Nanometer Chips Arrive Taiwan Semiconductor Manufacturing Co. (TSMC) plans to begin producing 3-nanometer semiconductor chips in the second half of 2022. Right now, 5-nm chips are the standard. TSMC will make its 3-nm chips using a tried-and-true semiconductor structure called the FinFET (short for “fin field-effect transistor”). Meanwhile, Samsung and Intel are moving to a different technique for 3 nm called nanosheet. (TSMC is eventually planning to abandon FinFETs.) At one point, TSMC’s sole 3-nm chip customer for 2022 was Apple, for the latter’s iPhone 14, but supply-chain issues have made it less certain that TSMC will be able to produce enough chips—which promise more design flexibility—to fulfill even that order. Seoul Joins the Metaverse After Facebook (now Meta) announced it was hell-bent on making the metaverse real, a host of other tech companies followed suit. Definitions differ, but the basic idea of the metaverse involves merging virtual reality and augmented reality with actual reality. Also jumping on the metaverse bandwagon is the government of the South Korean capital, Seoul, which plans to develop a “metaverse platform” by the end of 2022. To build this first public metaverse, Seoul will invest 3.9 billion won (US $3.3 million). 
The platform will offer public services and cultural events, beginning with the Metaverse 120 Center, a virtual-reality portal for citizens to address concerns that previously required a trip to city hall. Other planned projects include virtual exhibition halls for school courses and a digital representation of Deoksu Palace. The city expects the project to be complete by 2026. IBM’s Condors Take Flight In 2022, IBM will debut a new quantum processor—its biggest yet—as a stepping-stone to a 1,000-qubit processor by the end of 2023. This year’s iteration will contain 433 qubits, more than three times as many as the company’s 127-qubit Eagle processor, which was launched last year. Following the bird theme, the 433- and 1,000-qubit processors will be named Condor. There have been quantum computers with many more qubits; D-Wave Systems, for example, announced a 5,000-qubit computer in 2020. However, D-Wave’s computers are specialized machines for optimization problems. IBM’s Condors aim to be the largest general-purpose quantum processors. New Dark-Matter Detector The Forward Search Experiment (FASER) at CERN is slated to switch on in July 2022. The exact date depends on when the Large Hadron Collider is set to renew proton-proton collisions after three years of upgrades and maintenance. FASER will begin a hunt for dark matter and other particles that interact extremely weakly with “normal” matter. CERN, the fundamental physics research center near Geneva, has four main detectors attached to its Large Hadron Collider, but they aren’t well-suited to detecting dark matter. FASER won’t attempt to detect the particles directly; instead, it will search for the more strongly interacting Standard Model particles created when dark matter interacts with something else. The new detector was constructed while the collider was shut down from 2018 to 2021. 
Located 480 meters “downstream” of the ATLAS detector, FASER will also hunt for neutrinos produced in huge quantities by particle collisions in the LHC loop. The other CERN detectors have so far failed to detect such neutrinos. Pong Turns 50 Atari changed the course of video games when it released its first game, Pong, in 1972. While not the first video game—or even the first to be presented in an upright, arcade-style cabinet—Pong was the first to be commercially successful. The game was developed by engineer Allan Alcorn and originally assigned to him as a test after he was hired, before he began working on actual projects. However, executives at Atari saw potential in Pong’s simple game play and decided to develop it into a real product. Unlike the countless video games that came after it, the original Pong did not use any code or microprocessors. Instead, it was built from a television and transistor-transistor logic. The Green Hydrogen Boom Utility company Energias de Portugal (EDP), based in Lisbon, is on track to begin operating a 3-megawatt green hydrogen plant in Brazil by the end of the year. Green hydrogen is hydrogen produced in sustainable ways, using solar- or wind-powered electrolyzers to split water molecules into hydrogen and oxygen. According to the International Energy Agency, only 0.1 percent of hydrogen is produced this way. The plant will replace an existing coal-fired plant and generate hydrogen—which can be used in fuel cells—using solar photovoltaics. EDP’s roughly US $7.9 million pilot program is just the tip of the green hydrogen iceberg. Enegix Energy has announced plans for a $5.4 billion green hydrogen plant in the same Brazilian state, Ceará, where the EDP plant is being built. The green hydrogen market is predicted to generate revenue of nearly $10 billion by 2028, according to a November 2021 report by Research Dive. 
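For a rough sense of scale, a 3-megawatt electrolyzer plant’s hydrogen output can be estimated from a typical specific energy of about 50 kilowatt-hours per kilogram of hydrogen. That figure is an assumption for illustration (commercial electrolyzers vary, and the theoretical minimum is about 39.4 kWh/kg); the article does not give the EDP plant’s actual efficiency or duty cycle.

```python
# Back-of-the-envelope output for a 3 MW electrolyzer plant.
# 50 kWh/kg is an assumed typical specific energy, not EDP's figure.

plant_power_kw = 3_000          # 3 MW
kwh_per_kg = 50                 # assumed electrolyzer specific energy

kg_per_hour = plant_power_kw / kwh_per_kg
kg_per_day = kg_per_hour * 24   # assuming round-the-clock operation

print(f"{kg_per_hour:.0f} kg of H2 per hour")   # 60 kg/h
print(f"{kg_per_day:.0f} kg of H2 per day")     # 1440 kg/day
```

Since the plant runs on solar photovoltaics, the real daily output would be well below the round-the-clock figure.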
A Permanent Space Station for China China is scheduled to complete its Tiangong (“Heavenly Palace”) space station in 2022. The station, China’s first long-term space habitat, was preceded by the Tiangong-1 and Tiangong-2 stations, which orbited from 2011 to 2018 and 2016 to 2019, respectively. The new station’s core module, the Tianhe, was launched in April 2021. A further 10 missions by the end of 2022 will deliver other components and modules, with construction to be completed in orbit. The final station will have two laboratory modules in addition to the core module. Tiangong will orbit at roughly the same altitude as the International Space Station but will be only about one-fifth the mass of the ISS. A Cool Form of Energy Storage Cryogenic energy-storage company Highview Power will begin operations at its Carrington plant near Manchester, England, this year. Cryogenic energy storage is a long-term method of storing electricity by cooling air until it liquefies (about –196 °C). Crucially, the air is cooled when electricity is cheaper—at night, for example—and then stored until electricity demand peaks. The liquid air is then allowed to boil back into a gas, which drives a turbine to generate electricity. The 50-megawatt/250-megawatt-hour Carrington plant will be Highview Power’s first commercial plant using its cryogenic storage technology, dubbed CRYOBattery. Highview Power has said it plans to build a similar plant in Vermont, although it has not specified a timeline yet. Carbon-Neutral Cryptocurrency? Seattle-based startup Nori is set to offer a cryptocurrency for carbon removal. Nori will mint 500 million tokens of its Ethereum-based currency (called NORI). Individuals and companies can purchase and trade NORI, and eventually exchange any NORI they own for an equal number of carbon credits. Each carbon credit represents a tonne of carbon dioxide that has already been removed from the atmosphere and stored in the ground. 
When exchanged in this way, a NORI is retired, making it impossible for owners to “double count” carbon credits and thereby appear to offset more carbon than they actually have. The startup has acknowledged that Ethereum and other blockchain-based technologies consume an enormous amount of energy, so the carbon it sequesters could conceivably originate in cryptocurrency mining. However, Ethereum is scheduled in 2022 to switch to a much more energy-efficient method of verifying its blockchain, called proof of stake, which Nori will take advantage of when it launches.
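The exchange-and-retire mechanic can be expressed as a toy bookkeeping model. The class and method names below are illustrative only, not Nori’s actual token or smart-contract interface; the point is simply that exchanging a token permanently retires it, so the same tonne of removed CO2 cannot be claimed twice.

```python
class ToyCarbonRegistry:
    """Toy model of an exchange-and-retire carbon-token scheme.

    Names here are hypothetical, not Nori's API: exchanging a token
    for a carbon credit permanently retires it, so retired tonnage
    can only grow and can never be re-spent.
    """

    def __init__(self, balance):
        self.balance = balance      # tradable tokens held
        self.retired = 0            # tonnes of CO2 claimed, permanently

    def exchange_for_credits(self, tokens):
        if tokens > self.balance:
            raise ValueError("cannot retire more tokens than held")
        self.balance -= tokens      # tokens leave circulation...
        self.retired += tokens      # ...and become one-time credits
        return self.retired

holder = ToyCarbonRegistry(balance=10)
holder.exchange_for_credits(4)      # claim 4 tonnes of removal
print(holder.balance, holder.retired)
```

Because retirement is one-way, an owner’s claimed offsets can never exceed the tokens they actually gave up.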
  • Gravity Batteries, Green Hydrogen, and a Thorium Reactor for China
    Dec 30, 2021 06:00 AM PST
    2021 was a big year for energy-related news, what with the ongoing hunt for new forms of energy storage, the push for cleaner if not carbon-free electricity, and the events and research that spotlighted the weak links in our power grid. As the pandemic continued to grind on, it was actually comforting to know that smart people in the energy sector were working hard to keep the lights on, advance the technology, and improve people’s lives. IEEE Spectrum did its best to cover those developments, and these were the stories that our readers liked best. Gravity Energy Storage Will Show Its Potential in 2021 Why was this Spectrum’s most popular energy story of the year? Well, let’s think. As power grids everywhere increasingly rely on intermittent renewable energy, batteries and other forms of energy storage that can even out the bumps in supply and demand are taking on a crucial role. No battery is perfect, however, so engineers keep pushing for new and improved ways to store those electrons. The gravity batteries described in this story lift giant weights in the air or up mine shafts to store excess electricity, releasing the weights later on to recover the stored energy. One of the companies featured in the story, Gravitricity, completed its 250-kilowatt gravity-battery demonstrator in Edinburgh last April and is now working on a full-scale deployment at a mine in the Czech Republic. Lithium-Ion Battery Recycling Finally Takes Off in North America and Europe Battery makers around the world are cranking out lithium-ion batteries of various flavors as fast as they can. While lithium isn’t exactly in short supply, extracting it from the ground exacts a huge environmental cost. Thus the recent boom in battery recycling—and readers’ interest in this story on how the industry is expanding beyond China and South Korea and into the United States, Canada, and Western Europe. 
Last May, one of the story’s featured startups, Canada’s Li-Cycle, announced it would begin recycling the manufacturing scrap from Ultium Cell’s US $2.3 billion EV battery plant—that’s GM’s and LG Chem’s new mega-gigafactory in Lordstown, Ohio. Here’s How We Could Brighten Clouds to Cool the Earth Geoengineering—altering the planet to mitigate the worst effects of climate change—is an idea that has taken on new currency of late. As global temperatures rise, greenhouse gases accumulate, and all signs point to Really Bad Things happening in the coming decades, Spectrum readers are clearly looking for a way out of our current climate predicament. This article, by researchers at the Palo Alto Research Center (PARC) and the University of Washington, is one possible answer. The basic idea is to add particles of sea salt to the atmosphere to brighten clouds and cool the planet. We’ll still have to do the hard work of cutting carbon emissions, but geoengineering could be a way to buy us some time. Solar-to-Hydrogen Tech Sees “Remarkable” Efficiency Jump Another big development in the energy sector is the return of the hydrogen economy. This time around, though, the emphasis is on “green hydrogen”—that is, hydrogen produced using clean energy such as solar or wind power. Most of the world’s hydrogen comes from deeply polluting methods. And most green hydrogen production still relies on electrolyzers, which themselves consume lots of electricity. This story looks at promising research out of Japan’s Shinshu University on light-absorbing materials to split water into hydrogen and oxygen directly—cutting the electrolyzer out of the equation. As the story notes, it will take quite a bit more R&D until this method is “ready for prime-time hydrogen production.” China Says It’s Closing in on Thorium Nuclear Reactor Also getting a second look: nuclear power! While some recent efforts call for radical new reactor designs, this report highlights an old approach with a modern spin. 
Molten salt nuclear reactors fueled by thorium were first investigated at Oak Ridge National Laboratory in the 1950s. A new molten salt reactor reportedly being built by China follows the Oak Ridge design but also incorporates the same kind of high-temperature salt pumps used in concentrated solar-power plants. What the Texas-Freeze Fiasco Tells Us About the Future of the Grid In this clear-eyed consideration of last winter’s deadly deep freeze in Texas, Robert Hebner, director of the Center for Electromechanics at the University of Texas at Austin, describes the converging factors and troubled history that contributed to the catastrophic blackout. “It seems pretty clear that what happened in Texas was likely preventable with readily accessible and longstanding engineering practices,” Hebner concludes. “But a collective, and likely implicit, judgment was made that the risk to be mitigated was so small that mitigation would not be worth the cost. And nature ‘messed’ with that judgment.” One Atmospheric Nuclear Explosion Could Take Out the Power Grid Another popular story in the “things that are bad for the power grid” category was this piece by national security writer Natasha Bajema. She looked at a recent study out of the U.S. Geological Survey and the University of Colorado on the likely effects of detonating a several-kiloton nuclear weapon in the atmosphere and generating a high-altitude electromagnetic pulse (EMP). (To be fair, Foreign Policy, in a similar 2020 examination, rated the EMP problem as very much outsized and “the last thing you need to worry about in a nuclear explosion.”) The conductivity of the Earth, the Geological Survey scientists discovered, plays an important role in the outcome, with low-conductivity regions most at risk of suffering a “grid-crippling power surge,” as the electric field travels out through high-voltage power lines. Here’s hoping that doesn’t happen in 2022, or any other year. 
Off-Grid Solar’s Killer App Spectrum contributing editor Peter Fairley traveled to Kenya to report on a boom in agriculture driven by off-grid solar power and efficient solar-powered irrigation pumps. The pumps tap into vast stores of groundwater that lie not too far underground and cover much of sub-Saharan Africa. Solar-irrigation technology, combined with microlending payment plans, lets small farmers boost crop yields, lengthen growing seasons, and neutralize the effects of drought. It’s a win-win-win for a part of the world that could really use a victory right now. How Much Energy Does It Take to Grow a Tomato? Lastly but never leastly, Spectrum columnist and deep thinker Vaclav Smil contemplated the energy footprint of the tomato. Field tomatoes, unsurprisingly, are the least energy-intensive to produce, while raising hydroponic tomatoes grown in greenhouses can consume 60 times as much energy. Food for thought as we close out 2021.
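The gravity batteries mentioned above store energy as plain gravitational potential, E = mgh, which makes a quick sanity check easy. The mass and shaft depth below are illustrative round numbers, not any vendor’s specification.

```python
# Rough energy capacity of a gravity battery: E = m * g * h.
# Numbers are illustrative, not Gravitricity's (or anyone's) specs.

mass_kg = 12_000 * 1_000    # a hypothetical 12,000-tonne weight
g = 9.81                    # gravitational acceleration, m/s^2
height_m = 800              # a deep mine shaft

energy_joules = mass_kg * g * height_m
energy_mwh = energy_joules / 3.6e9   # 1 MWh = 3.6e9 J

print(f"{energy_mwh:.1f} MWh")
```

Even a very large weight dropped down a deep shaft stores only tens of megawatt-hours, which is why gravity storage is pitched for grid smoothing rather than bulk seasonal storage.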
  • China Will Attempt First Carbon-Neutral Winter Olympics
    Dec 29, 2021 12:00 PM PST
    About 160 kilometers northwest of Beijing, the city of Zhangjiakou with its rugged terrain boasts some of the richest wind and solar resources in China. Renewables account for nearly half of the city’s electricity output with less than a third of its full solar and wind potential of 70 gigawatts installed so far. That makes it an ideal cohost with Beijing for the 2022 Winter Olympic and Paralympic Games, which China plans to make the greenest yet. The plan is to power all 26 venues fully with renewables, marking a first in the games’ history. The Beijing 2022 Organising Committee aims to make the games carbon neutral, or as close as possible—a benchmark for the International Olympic Committee’s mission to make the Olympics carbon positive by 2024. Besides being a symbol for President Xi Jinping’s ambitious goal of China being carbon neutral by 2060, the 2022 games should drive sustainable development in the region. The event has already helped Beijing clean up its skies and environment, and has fired up local energy-technology markets. It will also be a global stage to showcase new energy-efficiency, alternate-transport, and refrigeration technologies. The Olympics will account for only a small fraction of the country’s annual electricity consumption. Powering them with clean energy sources won’t be difficult given China’s plentiful renewable capacity, says Michael Davidson, an engineering-systems and global-policy expert at the University of California, San Diego. But Davidson also points out that insufficient infrastructure to manage intermittent renewables and electricity-dispatch practices that don’t prioritize them mean that much of China’s green-power capacity is often not put to use. And because the game venues are connected to a grid that is powered by a variety of sources, asserting that all the electricity used at the games is 100 percent from clean energy sources is “complicated,” he says. 
Nonetheless, the games will be important in raising the profile of green energy. “The hope is that this process will put into place some institutions that could help leverage a much broader-scale move to green.” Case in point: The flexible DC grid put into place in Zhangjiakou in 2020 will let 22.5 billion kilowatt-hours of wind and solar energy flow from Zhangjiakou to Beijing every year. By the time the Paralympics end in March, the game venues are expected to have consumed about 400 million kWh of electricity. If all of it is indeed provided by renewables, that should reduce carbon emissions by 320,000 tonnes, according to sports outlet Inside the Games. After the athletes go home, the flexible DC grid will continue to clean up around 10 percent of the capital’s immense electricity consumption. Green transport infrastructure being built to shuttle athletes and spectators between venues will also be part of the games’ lasting legacy. A clean energy–powered high-speed railway that takes 47 minutes to travel between Beijing and Zhangjiakou was inaugurated in 2019. More than 85 percent of public-transport vehicles at the Olympics will be powered by batteries, hydrogen fuel cells, or natural gas, according to state media. In August, officials in the Chinese capital revealed a five-year hydrogen-energy plan, with goals to build 37 fueling stations and have about 3,000 fuel-cell vehicles on the road by 2023, for which the Olympics should also be a stepping-stone. Already, hydrogen fueling stations built by China’s petrochemical giant Sinopec, Pennsylvania-based Air Products, and French company Air Liquide have cropped up in Beijing, Zhangjiakou, and the Yanqing competition zone located in between. In Yanqing alone, 212 fuel-cell buses made by Beijing-based Beiqi Foton Motor Co. will shuttle spectators around.
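The reported savings are easy to sanity-check: the 320,000-tonne figure implies a grid emission factor of about 0.8 kg of CO2 per kilowatt-hour, roughly what a coal-heavy grid mix emits. A minimal check using only the article’s numbers (the emission factor itself is an inference, not stated in the article):

```python
# Sanity check of the reported carbon savings for the Olympic venues.
# Both input figures come from the article; the implied emission factor
# is an inference (it roughly matches a coal-heavy grid mix).
venue_consumption_kwh = 400e6       # ~400 million kWh consumed by the venues
reported_savings_tonnes = 320_000   # savings reported by Inside the Games

implied_factor = reported_savings_tonnes * 1000 / venue_consumption_kwh
print(f"Implied grid emission factor: {implied_factor:.2f} kg CO2/kWh")
```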
Even the iconic Olympic torch will burn hydrogen for its flame. The 2022 event will also put a limelight on climate-friendly refrigeration. The immense 12,000-square-meter speed-skating oval in downtown Beijing—8 times the size of a hockey rink—will be the first in the world to use carbon dioxide for making ice. “We’ve built skating rinks with carbon dioxide direct cooling but never a speed-skating oval,” says Wayne Dilk of Toronto-based refrigeration company CIMCO Refrigeration, which has built most of the National Hockey League arenas in North America and designed and provided consulting services for the Olympics’ icy venues. Ice-rink technology typically relies on refrigerants siphoning heat away from brine circulated under the floors, Dilk explains. But CO2-based cooling systems, which are getting more popular mainly in Europe and North America for supermarkets, food-manufacturing plants, and ice rinks, use CO2 both as the refrigerant and for transporting heat away from under the floor, where it is pumped in liquid form. CO2 is a climate villain, of course, but conventional fluorocarbon refrigerants are worse. The common R-22 form of Freon, for example, is about 1,800 times as potent a greenhouse gas. CO2 cooling systems are also 30 percent more energy efficient than Freon-based ones, says Dilk. Plus, the CO2 system produces higher-temperature waste heat, which can be used for space heating and hot water. And while the system is more expensive to build because it runs at higher pressure, the temperature across the large ice surface stays within a range of only 0.5 °C, giving more uniform ice. Consistent temperature and ice quality generate better competitive racing times. The Beijing 2022 hockey arenas and sliding center for bobsled and luge use climate-friendly ammonia or Opteon as refrigerants.
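To put the refrigerant comparison in rough numbers: because R-22 is about 1,800 times as potent a greenhouse gas as CO2, even a small leak dominates the climate impact. The sketch below uses the article’s potency figure; the 100 kg charge and 10 percent annual leak rate are illustrative assumptions, not values from the article:

```python
# Illustrative CO2-equivalent comparison of refrigerant leakage.
# The ~1,800x potency of R-22 relative to CO2 is from the article;
# the charge size and leak rate are assumed for illustration only.
gwp = {"R-22": 1800, "CO2": 1}   # global warming potential relative to CO2
charge_kg = 100                  # assumed refrigerant charge in the system
annual_leak_rate = 0.10          # assumed 10% of the charge leaks per year

for refrigerant, potency in gwp.items():
    leaked_kg = charge_kg * annual_leak_rate
    co2e_kg = leaked_kg * potency
    print(f"{refrigerant}: {leaked_kg:.0f} kg leaked -> {co2e_kg:,.0f} kg CO2e/year")
```

Under these assumptions, the R-22 system’s leakage alone is equivalent to 18 tonnes of CO2 per year, against 10 kg for the CO2 system.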
Besides being a key part of the greenest Winter Olympics, these state-of-the-art ice venues should seal the deal for another goal China has in 2022: to establish itself as a world-class winter sports and tourism destination. This article appears in the January 2022 print issue as “China’s Green Winter Olympics.”
  • In 1989, General Magic Saw the Future of Smartphones
    Dec 29, 2021 08:00 AM PST
Sometimes a design is so perfectly representative of its time that to see it brings long-forgotten memories flooding back. The user interface of the Motorola Envoy does that for me, even though I never owned one, or indeed any personal digital assistant. There’s just something about the Envoy’s bitmapped grayscale icons that screams 1990s, a time when we were on the cusp of the Internet boom but didn’t yet realize what that meant.

The Motorola Envoy was a paragon of skeuomorphic design

Open up the Envoy, and the home screen features a tableau of a typical office circa 1994. On your grayscale desk sits a telephone (a landline, of course), a Rolodex, a notepad, and a calendar. Behind the desk are a wall clock, in- and out-boxes, and a filing cabinet. It’s a masterstroke in skeuomorphic design. Skeuomorphism is a term used by graphical user interface designers to describe GUI objects that mimic their real-world counterparts; click on the telephone to make a call, click on the calendar to make an appointment. In 1994, when the Envoy debuted, the design was so intuitive that many users did not need to consult the user manual to start using their new device. About the size of a paperback and weighing in at 0.77 kilograms (1.7 pounds), the Envoy was a little too big to fit in your pocket. It had a 7.6-by-11.4-centimeter LCD screen, which reviewers at the time noted was not backlit. The device came with 1 megabyte of RAM, 4 MB of ROM, a built-in 4,800-bit-per-second radio modem, a fax and data modem, and an infrared transceiver. The Envoy was one of the first handheld computers designed to run the Magic Cap (short for Communicating Applications Platform) operating system. It used the metaphor of a room to organize applications and help users navigate through the various options. For most business users, the Office with its default desk was the main interface.
The user could also navigate to the virtual Hallway—complete with wall art and furniture—and then enter other rooms, including the Game Room, Living Room, Storeroom, and Control Room. Each room featured its own applications.

[Image caption: The Motorola Envoy’s graphical user interface was based on skeuomorphic design, in which virtual objects resemble their real-world counterparts and suggest their uses. Credit: Cooper Hewitt, Smithsonian Design Museum]

A control bar across the bottom of the screen aided in navigation. The desk button, the equivalent of a home link, returned the user to the Office. The rubber stamp offered decorative elements, including emoticons, which were then a new concept. The magic lamp gave access to search, print, fax, and mail commands. An icon that looks like a purse, but was described as a tote bag, served as a holding place for copied text that could then be carried to other applications, similar to your computer’s clipboard. The tool caddy invoked drawing and editing options. The keyboard button brought up an onscreen keyboard, an innovation widely copied by later PDAs and smartphones. Skeuomorphic design began to wane in the mid-2000s, as Microsoft, Google, and Apple embraced flat design. A minimalist response to skeuomorphism, flat design prioritized two-dimensional elements and bright colors. Gone were needless animation and 3D effects. Apple’s trash can and Windows’ recycling bin are two skeuomorphic icons that survived. (Envoy had a garbage truck on its toolbar for that purpose.) Part of the shift away from skeuomorphism was purely functional; as devices added more applications and features, designers needed a cleaner display to organize information. And the fast-paced evolution of both physical and digital technologies quickly led to outdated icons. Does anyone still use a Rolodex to store contact information or a floppy disc to save data? As their real-world counterparts became obsolete, the skeuomorphic equivalents looked old-fashioned.
The Envoy’s user interface is one of the reasons why the object pictured at top found its way to the collections of the Cooper Hewitt, Smithsonian Design Museum, in New York City. Preserving and displaying the Envoy’s functionality a quarter century after its heyday presented a special challenge. Ben Fino-Radin, founder and lead conservator at Small Data Industries, worked on the digital conservation of the Envoy and wrote an instructive blog post about it. Museums have centuries’ worth of experience preserving physical objects, but capturing the unique 1994 feel of a software design required new technical expertise. Small Data Industries ended up purchasing a second Envoy on eBay in order to deconstruct it, inspect the internal components, and reverse engineer how it worked.

How General Magic both failed and succeeded

Although the Envoy’s interface is what captured my interest and made me select it for this month’s column, that is not why the Envoy is beloved of computer historians and retro-tech enthusiasts. Rather, it is the company behind the Envoy, General Magic, that continues to fascinate. General Magic is considered a classic example of a Silicon Valley heroic failure. That is, if you define the precursor to the smartphone and a design team whose members later brought us the iPod, iPhone, Android, eBay, Dreamweaver, Apple Watch, and Nest as failures. The story of General Magic begins at Apple in 1989, when Bill Atkinson, Andy Hertzfeld, and Marc Porat, all veterans of the Macintosh development team, started working on the Paradigm project. They tried to convince Apple CEO John Sculley that the next big thing was a marriage of communications and consumer electronics embodied in a handheld device. After about nine months, the team was not finding the support it wanted within Apple, and Porat convinced Sculley to spin it off as an independent company, with Apple maintaining a 10 percent stake.
In 1990, General Magic kicked off its operations with an ambitious mission statement: “We have a dream of improving the lives of many millions of people by means of small, intimate life support systems that people carry with them everywhere. These systems will help people to organize their lives, to communicate with other people, and to access information of all kinds. They will be simple to use, and come in a wide range of models to fit every budget, need, and taste. They will change the way people live and communicate.” Pretty heady stuff. General Magic quickly became the hottest secret in Silicon Valley. The company relied on confidentiality and nondisclosure agreements to keep its plans from leaking, but as well-known developers joined the team, anticipation of greatness kept building. General Magic inked partnerships with Sony, Motorola, AT&T, Matsushita, and Philips, each bringing a specific expertise to the table. At its heart, General Magic was attempting to transform personal communications. A competitor to the Motorola Envoy that also used Magic Cap, Sony’s Magic Link, had a phone jack and could connect to the AT&T PersonaLink Service network via a dial-up modem; it also had built-in access to the America Online network. The Envoy, on the other hand, had an antenna to connect to the ARDIS (Advanced Radio Data Information Service) network, the first wireless data network in the United States. Formed in 1983 by Motorola and IBM, ARDIS had sketchy data coverage, its speeds were slow (no more than 19.2 kilobits per second), and costs were high. The Envoy initially sold for US $1,500, but monthly data fees could run $400 or more. Neither the Magic Link nor the Envoy was a commercial success.
Perhaps it was the hubris before the fall, or maybe the General Magic team truly believed that they were undertaking something historic, but the company allowed documentary filmmaker David Hoffman to record meetings and interview its employees. Filmmakers Sarah Kerruish, Matt Maude, and Michael Stern took this archival treasure trove and turned it into the award-winning 2018 documentary General Magic. The original footage perfectly captures the energy and drive of a 1990s startup. Rabbits roam the office to help spur creativity, personal hygiene seems optional, and pulling all-nighters is the norm. Young engineers invent their own versions of the USB and touch screens in order to realize their dreams. The film also shows a company so caught up in a vision of the future that it fails to see the world changing around it—specifically the emergence of the World Wide Web. As General Magic begins to miss deadlines and its products don’t live up to their hype, the company falters and goes into bankruptcy. But the story doesn’t end there. The cast of characters moves on to other projects that prove far more remarkable than Magic Cap and the Envoy. Tony Fadell, who had joined General Magic right after college, goes on to invent the iPod, coinvent the iPhone, and found Nest (now Google Nest). Kevin Lynch, a star Mac software developer when he joined General Magic, leads the team that develops Dreamweaver (now an Adobe product) and serves as lead engineer on the Apple Watch. Megan Smith, a product design lead at General Magic, later becomes chief technology officer in the Obama administration. Marc Porat had challenged his team to create a product that “once you use it, you won’t be able to live without it.” General Magic fell short of that mark, but it groomed a cadre of engineers and designers who went on to deliver those can’t-live-without-it devices. 
Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the January 2022 print issue as “Ode to the Envoy.”
  • Ham Radio Jamming, Wireless Industry Battlegrounds, and IoT in Space
    Dec 29, 2021 06:00 AM PST
Communications bring us all together, and people are always experimenting with new ways to communicate. Despite—or perhaps because of—the global pandemic, 2021 saw plenty of new innovations in communications technologies. 5G has cemented its place in the cellular world, even as the industry looks toward 6G. Companies experimented with new kinds of satellite networks, new ways of building cell towers, and new ways of creating holograms. And even as the pandemic created a remote work world, some governments clamped down on wireless communications. So in case you missed anything, we’ve got you covered. Here are the highlights of what went down in telecom this year:

Cuba Jamming Ham Radio? Listen For Yourself

Back in July, ham-radio operators in Florida began noticing interference swamping many of the amateur broadcasting bands. After coordination with operators in South America and Europe, the source of the interfering signals—which sound like “the unfortunate offspring of a frog and a Dalek”—was quickly identified as Cuba. At the time, Cubans were protesting in large numbers in response to the government’s handling of the pandemic and other economic woes, and many theorized that the government had cracked down on amateur radio bands as part of a wider response. The jamming seems to have subsided since (you can check for yourself by following the instructions in the original story), but for several days this past summer, it caused a lot of confusion and anxiety in the ham-radio community.

How the Huawei Fight is Changing the Face of 5G

There was a time when Huawei was ascendant in the wireless world, and the consensus in the industry was that the equipment vendor was the one to beat when it came to 5G. Now…that’s not quite so true. After three years of sanctions by the U.S. government, portions of Huawei’s hold on 5G infrastructure and mobile devices have slipped.
The biggest fall came in its smartphone business, where in 2021 alone the company’s revenue dropped by an estimated US $30 billion to $40 billion (from a reported US $136.7 billion in 2020). Huawei isn’t down and out yet, however—it’s still one of the largest telecom equipment vendors in the world, and the company still sees plenty of interest in its infrastructure technologies around the world. And there’s 6G to think about; regional battles over the future direction of cellular technologies surely won’t die down anytime soon.

Swarm Takes LoRa Sky-High

The Internet of Things is still a contentious ecosystem. A handful of different wireless standards—5G for IoT, LoRa, Zigbee—are vying for dominance in the space. Slowly but surely, however, LoRa (short for long range) seems to be winning out. In March, satellite startup Swarm, which carries the dubious honor of having conducted the first illegal satellite launch in history, announced it would be using LoRa for its space-based IoT relay network. The company demonstrated that LoRa indeed lived up to its name, as it was able to send signals up to 2,900 kilometers, or roughly the distance between Los Angeles and Chicago. In August, Swarm was acquired by SpaceX, further cementing the company’s—and by extension, LoRa’s—place in an emerging IoT satellite industry.

The U.S. Government Finally Gets Serious About IoT Security

Elsewhere in the IoT world, the U.S. government passed a sweeping cybersecurity bill called the Internet of Things Cybersecurity Improvement Act of 2020 at the very tail end of that year. The law is a more flexible and adaptable approach to cybersecurity than previous laws. Crucially, it requires the National Institute of Standards and Technology to establish best practices that other government agencies must then follow when purchasing IoT devices. The initial rules unveiled by NIST in 2021 include requiring an over-the-air update option for devices and unique device IDs.
And while the law pertains only to devices purchased by the U.S. government, there’s little reason to suspect it won’t have broad and ongoing effects on the IoT industry. Companies will likely build NIST’s cybersecurity requirements into all of their devices, whether selling to the U.S. government or elsewhere.

Hologram-in-a-Box Can Teleport You Anywhere

PORTL began shipping telephone-booth-size volumetric displays, offering an alternative to Zoom calls for those sick of them (and who can fork over US $60,000). Volumetric displays are more sophisticated versions of the “holograms” that have popped up in recent years, most noticeably for live concerts in order to controversially create performances by Tupac, Prince, and others. PORTL’s tech instead records a three-dimensional video of a person and transmits it to the person they’re conversing with. The speaker then appears inside PORTL’s booth at the other end, thanks to a combination of an open-cell LCD panel, bright LEDs, and shadows that trick the brain into seeing a two-dimensional image in 3D. PORTL hopes to introduce a smaller mini-PORTL for a fraction of the larger unit’s price.

The Cellular Industry’s Clash Over the Movement to Remake Networks

For years, there’s been a simmering resentment in the telecom world between the network operators—companies like AT&T, Deutsche Telekom, and Vodafone—that provide cell service to customers, and the vendors like Ericsson and Nokia from which they buy equipment to build their networks. The resentment stems from the ability of vendors to lock operators into their ecosystems with proprietary technologies and from the high prices that result from creating such captive markets. Recently, however, that resentment has boiled over, and operators are leading a charge to invent new technologies and standards that will drastically change the way wireless networks are built.
Bundled up into a movement called Open RAN (for radio access network, the portion of a cell network, like a cell tower, that connects a phone to everything else), the operators have begun forcing vendors to work with them to create open interfaces between components, split software and hardware functions, and develop more AI technologies to manage networks. The goal? Break the hold the big three vendors—Ericsson, Nokia, and Huawei—have over the rest of the industry. Open RAN has seen some roaring successes over the past year. It’s also seen some turmoil.

Here’s What 6G Will Be, According to the Creator of Massive MIMO

Believe it or not, 6G development has been going on for years already. In fact, we first wrote about it at IEEE Spectrum in 2018. Much of the work is still limited to fundamental research, such as investigating whether terahertz waves could be a good option for a new, high-data-rate spectrum band. Tom Marzetta, formerly of Nokia Bell Labs and currently a professor at New York University’s NYU Wireless research center, is focused on developing something “ten times better than massive MIMO.” MIMO is short for multiple-input, multiple-output, a technique that, as the name suggests, uses multiple antennas to send and receive several signals at once, which increases the overall data throughput of a cell tower or base station. Massive MIMO dials the concept up even more by scaling the number of signals an antenna array can handle to dozens or even hundreds at a time. Marzetta knows better than anyone else how to improve massive MIMO for the next cellular generation—he invented the technology. His Q&A with IEEE Spectrum is chock-full of insights on what 6G might have in store for us all.

Forget Cryptocurrencies and NFTs—Securing Devices Is the Future of Blockchain Technology

Blockchains are currently having a moment, thanks to the attention being paid to cryptocurrencies and non-fungible tokens (NFTs).
While the tech has plenty of evangelizers who see this as crypto’s triumphant, crowning moment, there are still plenty of us scratching our heads about what, exactly, any of this is good for. Here’s one option that isn’t getting discussed much, possibly because it’s not as fancy or splashy as crypto: cybersecurity. Earlier this year, the Zigbee Alliance put out a standard by its Project Connected Home over IP (CHIP) working group with the aim of making it easier and safer for IoT devices to communicate with each other. The standard describes a blockchain-based ledger that contains information about each IoT device certified by CHIP, its manufacturer, and other important information like its current software version. Using a blockchain ledger to track device security is a simple way to remove the burden from device owners to monitor potentially dozens of devices themselves.

Why Did It Take a Global Pandemic to Trigger the WFH Revolution?

Believe it or not, there was actually a time before the global COVID-19 pandemic. Even as the pandemic drags through its second year and many of us grow more comfortable working from home, it will pass eventually. At that point, companies and workers will have to negotiate returns to offices, hybrid work agreements, and remote work situations. (Of course, many companies are already doing this, for better or worse.) But here’s the thing—the technologies that let many people work from home have existed for years, if not decades. When the pandemic first emerged in 2020, many people were able to grab what they needed from their desks, bring it home, and set up shop without interruption. So why did it take the pandemic to create the work-from-home revolution? It’s simple—for the first time, we had no other choice.

St. Helena’s New Undersea Cable Will Deliver 18 Gb/s per Person

Undaunted by the pandemic, one of the most remote inhabited islands in the world underwent the first stages of a truly tremendous upgrade to its connection with the outside world this year. A spur from Google’s Equiano undersea cable landed on St. Helena, which is located in the South Atlantic, in September. Currently, the island relies on a single satellite dish to maintain a 40-megabit-per-second link shared among the island’s 4,500 inhabitants. The cable, when it’s lit up in 2022, will flood the island with up to 80 terabits per second of data. If you do the math, as we did in our headline, that comes out to about 18 gigabits per second per person. Seems like overkill, right? As it stands, most of that data won’t be going to the island’s residents. The cost of the cable’s operation is being subsidized by satellite companies. OneWeb is one such company, and it sees the remote island as an ideal place to build ground stations for its satellite network. The island has overcome long odds to be where it is, on the brink of a massive infrastructure upgrade. There’s just one thing still standing in its way: the island’s incumbent telecom monopoly, which, despite predatory pricing and failing infrastructure, might just be entrenched enough to turn the cable spur into a cable to nowhere.
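The headline arithmetic checks out in a couple of lines (capacity and population figures as given in the article):

```python
# "If you do the math": 80 Tb/s shared among ~4,500 residents.
cable_capacity_bps = 80e12   # Equiano spur: up to 80 terabits per second
population = 4500            # St. Helena's inhabitants
current_link_bps = 40e6      # today's shared 40 Mb/s satellite link

per_person_gbps = cable_capacity_bps / population / 1e9
print(f"{per_person_gbps:.1f} Gb/s per person")                   # ~17.8
print(f"{cable_capacity_bps / current_link_bps:,.0f}x today's total link")
```

Rounded, that is the “about 18 gigabits per second per person” of the headline, and a two-million-fold jump over the island’s current total bandwidth.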
  • IEEE Honors Pioneering Engineers
    Dec 28, 2021 11:00 AM PST
Meet the recipients of the 2022 IEEE medals, service awards, honorary membership, and corporate recognition. The awards are presented on behalf of the IEEE Board of Directors.

IEEE MEDAL OF HONOR
Sponsor: IEEE Foundation
ASAD M. MADNI, University of California, Los Angeles
“For pioneering contributions to the development and commercialization of innovative sensing and systems technologies, and for distinguished research leadership.”

IEEE FRANCES E. ALLEN MEDAL
Sponsor: IBM
Corecipients: EUGENE W. MYERS, Max Planck Institute of Molecular Cell Biology and Genetics and Center for Systems Biology Dresden, Germany; WEBB MILLER, The Pennsylvania State University (retired), State College, Pa.
“For pioneering contributions to sequence analysis algorithms and their applications to biosequence search, genome sequencing, and comparative genome analyses.”

IEEE ALEXANDER GRAHAM BELL MEDAL
Sponsor: Nokia Bell Labs
P. R. KUMAR, Texas A&M University, College Station
“For seminal contributions to the modeling, analysis, and design of wireless networks.”

IEEE MILDRED DRESSELHAUS MEDAL
Sponsor: Google
ANANTHA CHANDRAKASAN, MIT
“For contributions to ultralow-power circuits and systems, and for leadership in academia and advancing diversity in the profession.”

IEEE EDISON MEDAL
Sponsor: Samsung Electronics Co.
ALAN BOVIK, The University of Texas at Austin
“For pioneering high-impact scientific and engineering contributions leading to the perceptually optimized global streaming and sharing of visual media.”

IEEE MEDAL FOR ENVIRONMENTAL AND SAFETY TECHNOLOGIES
Sponsor: Toyota Motor Corp.
Corecipients: SAGAWA MASATO, Advanced Magnetic Materials, Korat, Thailand; JOHN J. CROAT, John Croat Consulting, Inc., Naples, Fla.
“For contributions to the development of rare earth-iron-boron permanent magnets for use in high-efficiency motors, generators, and other devices.”

IEEE FOUNDERS MEDAL
Sponsor: IEEE Richard and Mary Jo Stanley Memorial Fund of the IEEE Foundation
JOHN BROOKS SLAUGHTER, University of Southern California, Los Angeles
“For leadership and administration significantly advancing inclusion and racial diversity in the engineering profession across government, academic, and non-profit organizations.”

IEEE RICHARD W. HAMMING MEDAL
Sponsor: Qualcomm
MADHU SUDAN, Harvard
“For fundamental contributions to probabilistically checkable proofs and list decoding of Reed-Solomon codes.”

IEEE MEDAL FOR INNOVATIONS IN HEALTHCARE TECHNOLOGY
Sponsor: IEEE Engineering in Medicine and Biology Society
JAMES G. FUJIMOTO, MIT
“For pioneering the development and commercialization of optical coherence tomography for medical imaging and diagnostics.”

IEEE THEODORE W. HISSEY OUTSTANDING YOUNG PROFESSIONAL AWARD
Sponsor: IEEE Young Professionals, Photonics Society, Power & Energy Society
EDHEM (EDDIE) ČUSTOVIĆ, La Trobe University, Bundoora, Victoria, Australia
“For leadership in the empowerment and development of technology professionals globally.”

IEEE JACK S. KILBY SIGNAL PROCESSING MEDAL
Sponsor: Apple
DAVID L. DONOHO, Stanford
“For groundbreaking contributions to sparse signal recovery and compressed sensing.”

IEEE/RSE JAMES CLERK MAXWELL MEDAL
Funder: ARM
INGO WOLFF, IMST GmbH, Kamp-Lintfort, Germany
“For the development of numerical electromagnetic field analysis techniques to design advanced mobile and satellite communication systems.”

IEEE JAMES H. MULLIGAN, JR. EDUCATION MEDAL
Sponsor: MathWorks, Pearson, Lockheed Martin Corp., and the IEEE Life Members Fund
NED MOHAN, University of Minnesota, Minneapolis
“For leadership in power engineering education by developing courses, textbooks, labs, and a faculty network.”

IEEE JUN-ICHI NISHIZAWA MEDAL
Sponsor: The Federation of Electric Power Companies, Japan
UMESH K. MISHRA, University of California, Santa Barbara
“For contributions to the development of gallium nitride-based electronics.”

IEEE ROBERT N. NOYCE MEDAL
Sponsor: Intel Corp.
JINGSHENG JASON CONG, University of California, Los Angeles
“For fundamental contributions to electronic design automation and FPGA design methods.”

IEEE DENNIS J. PICARD MEDAL FOR RADAR TECHNOLOGIES AND APPLICATIONS
Sponsor: Raytheon Technologies
MOENESS G. AMIN, Villanova University, Pa.
“For contributions to radar signal processing across a wide range of applications including through-the-wall imaging and health monitoring.”

IEEE MEDAL IN POWER ENGINEERING
Sponsors: IEEE Industry Applications, Industrial Electronics, Power Electronics, and Power & Energy societies
THOMAS M. JAHNS, University of Wisconsin, Madison
“For contributions to the development of high-efficiency permanent magnet machines and drives.”

IEEE SIMON RAMO MEDAL
Sponsor: Northrop Grumman Corp.
PRAVIN VARAIYA, University of California, Berkeley
“For seminal contributions to the engineering, analysis, and design of complex energy, transportation, and communication systems.”

IEEE JOHN VON NEUMANN MEDAL
Sponsor: IBM Corp.
DEBORAH ESTRIN, Cornell
“For leadership in mobile and wireless sensing systems technologies and applications, including personal health management.”

IEEE CORPORATE INNOVATION AWARD
Sponsor: IEEE
THE ARGO PROGRAM, Woods Hole Oceanographic Institution, Mass.
“For innovation in large-scale autonomous observations in oceanography with global impacts in marine and climate science and technology.”

IEEE RICHARD M. EMBERSON AWARD
Sponsor: IEEE Technical Activities Board
FRED MINTZER, Blue Gene Watson Supercomputer Center, IBM T. J. Watson Research Center (retired), Yorktown Heights, N.Y.
“For outstanding leadership of technical activities including the IEEE Collabratec and TAB technology-centric communities.”

IEEE HARADEN PRATT AWARD
Sponsor: IEEE Foundation
JOSEPH V. LILLIE, BIZPHYX (retired), Lafayette, La.
“For sustained and outstanding focus on the engagement of volunteers and staff in implementing continuous improvement of IEEE operations.”

IEEE HONORARY MEMBERSHIP
Sponsor: IEEE
CALYAMPUDI RADHAKRISHNA (C.R.) RAO, The Pennsylvania State University, State College, Pa., and University at Buffalo
“For contributions to fundamental statistical theories and their applications to engineering and science, particularly in signal processing and communications.”

For additional information on the recipients and the awards process, visit the IEEE Awards website.
  • A Robot for the Worst Job in the Warehouse
    Dec 28, 2021 08:00 AM PST
    As COVID-19 stresses global supply chains, the logistics industry is looking to automation to help keep workers safe and boost their efficiency. But there are many warehouse operations that don’t lend themselves to traditional automation—namely, tasks where the inputs and outputs of a process aren’t always well defined and can’t be completely controlled. A new generation of robots with the intelligence and flexibility to handle the kind of variation that people take in stride is entering warehouse environments.

    A prime example is Stretch, a new robot from Boston Dynamics that can move heavy boxes where they need to go just as fast as an experienced warehouse worker. Stretch’s design is something of a departure from the humanoid and quadrupedal robots that Boston Dynamics is best known for, such as Atlas and Spot.

    With its single massive arm, a gripper packed with sensors and an array of suction cups, and an omnidirectional mobile base, Stretch can transfer boxes that weigh as much as 50 pounds (23 kilograms) from the back of a truck to a conveyor belt at a rate of 800 boxes per hour. An experienced human worker can move boxes at a similar rate, but not all day long, whereas Stretch can go for 16 hours before recharging. And this kind of work is punishing on the human body, especially when heavy boxes have to be moved from near a trailer’s ceiling or floor.

    “Truck unloading is one of the hardest jobs in a warehouse, and that’s one of the reasons we’re starting there with Stretch,” says Kevin Blankespoor, senior vice president of warehouse robotics at Boston Dynamics. Blankespoor explains that Stretch isn’t meant to replace people entirely; the idea is that multiple Stretch robots could make a human worker an order of magnitude more efficient. “Typically, you’ll have two people unloading each truck. Where we want to get with Stretch is to have one person unloading four or five trucks at the same time, using Stretches as tools.”

    All Stretch needs is to be shown the back of a trailer packed with boxes, and it’ll autonomously go to work, placing each box on a conveyor belt one by one until the trailer is empty. People are still there to make sure that everything goes smoothly, and they can step in if Stretch runs into something it can’t handle, but their full-time job becomes robot supervision instead of lifting heavy boxes all day.

    Achieving this level of reliable autonomy with Stretch has taken Boston Dynamics years of work, building on decades of experience developing robots that are strong, fast, and agile. Besides the challenge of building a high-performance robotic arm, the company also had to solve problems that people find trivial but that are difficult for robots, like looking at a wall of closely packed brown boxes and being able to tell where one stops and another begins.

    Safety is also a focus, says Blankespoor, explaining that Stretch follows the standards for mobile industrial robots set by the American National Standards Institute and the Robotics Industry Association. That the robot operates inside a truck or trailer also helps to keep Stretch safely isolated from people working nearby, and, at least for now, the trailer opening is fenced off while the robot is inside.

    Stretch is optimized for moving boxes, a task that’s required throughout a warehouse. Boston Dynamics hopes that over the longer term the robot will be flexible enough to put its box-moving expertise to use wherever it’s needed.
    In addition to unloading trucks, Stretch has the potential to unload boxes from pallets, put boxes on shelves, build orders out of multiple boxes from different places in a warehouse, and ultimately load boxes onto trucks, a much more difficult problem than unloading due to the planning and precision required.

    In the short term, unloading a trailer (part of a warehouse job called “receiving”) is the best place for a robot like Stretch, agrees Matt Beane, who studies work involving robotics and AI at the University of California, Santa Barbara. “No one wants to do receiving,” he says. “It’s dangerous, tiring, and monotonous.” But Beane, who for the last two years has led a team of field researchers in a nationwide study of automation in warehousing, points out that there may be important nuances to the job that a robot such as Stretch will probably miss, like interacting with the people who are working other parts of the receiving process. “There’s subtle, high-bandwidth information being exchanged about boxes that humans down the line use as key inputs to do their job effectively, and I will be singularly impressed if Stretch can match that.”

    Boston Dynamics spent much of 2021 turning Stretch from a prototype, built largely from pieces designed for Atlas and Spot, into a production-ready system that will begin shipping to a select group of customers in 2022, with broader sales expected in 2023. For Blankespoor, that milestone will represent just the beginning. He feels that such robots are poised to have an enormous impact on the logistics industry. “Despite the success of automation in manufacturing, warehouses are still almost entirely manually operated—we’re just starting to see a new generation of robots that can handle the variation you see in a warehouse, and that’s what we’re excited about with Stretch.”

Engineering on Twitter