Engineering Community Portal

MERLOT Engineering

Welcome – From the Editor

Welcome to the Engineering Portal on MERLOT. Here you will find resources on a wide range of topics, from aerospace engineering to petroleum engineering, to support your teaching and research.

As you scroll this page, you will find many Engineering resources, including the most recently added Engineering materials and members; journals and publications; and Engineering education alerts and Twitter feeds.

Showcase

Over 150 embeddable or downloadable 3D simulations in the subject areas of Automation, Electro/Mechanical, Process Control, and Renewable Energy. These short 3-7 minute simulations cover a range of topics and help students understand conceptual engineering ideas.

Each video is hosted on Vimeo and can be played, embedded, or downloaded for use in the classroom or online. Another option is an embeddable HTML player, created in Storyline, with review questions for each simulation that reinforce the concepts learned.

The collection was made possible by a Department of Labor grant. Extensive storyboarding and scripting with instructors and industry experts ensures the content is accurate and up to date.

Engineering Technology 3D Simulations in MERLOT

New Materials

New Members

Engineering on the Web

  • Engineering dean hails innovative, sustainable Tesla Semi | University of Nevada, Reno
    Dec 02, 2022 12:34 PM PST
  • Clarkson University Senior Aerospace Engineering Class Gets Tour of Beta Technologies
    Dec 02, 2022 12:13 PM PST
  • Quanta secures engineering support framework extensions - World Pipelines
    Dec 02, 2022 12:00 PM PST
  • Switzerland fines engineering giant R75m over South African bribery | Business - News24
    Dec 02, 2022 11:09 AM PST
  • Video Friday: Humanoid Soccer
    Dec 02, 2022 09:47 AM PST
    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND ICRA 2023: 29 May–2 June 2023, LONDON Enjoy today’s videos! The videos shows scenes from the RoboCup 2022 Humanoid AdultSize competition in Bangkok, Thailand. The robots of Team NimbRo of University of Bonn, Germany, won the main soccer tournament, the Drop-in tournament, and the Technical Challenges. Consequently, NimbRo came in first in the overall Best-Humanoid ranking. [ NimbRo ] Have you ever seen a robot dancing? One of the highlights of the 20th anniversary event of Robotnik was the choreography between the professional ballet dancer Sherezade Soriano and the mobile manipulator robot RB-KAIROS+. [ Robotnik ] This video celebrates the 10-year anniversary of the University of Zurich’s Robotics and Perception Group, led by Prof. Davide Scaramuzza. The lab was founded in 2012. More than 300 people worked in our lab as Bsc/Msc/Ph.D. students, postdocs, and visiting researchers. We thank all of them for contributing to our research. The lab made important contributions to autonomous, agile vision-based navigation of micro aerial vehicles and event cameras for mobile robotics and computer vision. Ten years, so much accomplished, and they’re just getting started! [ UZH RPG ] Printed fiducial markers are inexpensive, easy to deploy, robust and deservedly popular. However, their data payload is also static, unable to express any state beyond being present. Our “DynaTags” are simple mechanisms constructed from paper that express multiple payloads, allowing practitioners and researchers to create new and compelling physical-digital experiences. [ CMU FIG ] CNN’s “Tech for Good” hears from Dr. Marko Bjelonic, from ETH Zürich’s Robotic Systems Lab and founder of the Swiss Mile robot, who believes automated machines are the key to automating our cities. His four-legged and wheeled robot is able to change shape within seconds, overcome steps and navigate between indoor and outdoor environments. It’s hoped the bot, which can travel up to 20 kilometres an hour and carry 50 kilograms, has the potential to serve as a member of search-and-rescue teams in the future. [ Swiss-Mile ] Thanks, Marko! Be the tiny DIY robot cat you’ve always wanted to be! All of this is open source, and you can get it running on your own Nybble (which makes a great holiday gift!) at the link below. [ Petoi ] Thanks, Rz! In his dissertation “Autonomous Operation of a Reconfigurable Multi-Robot System for Planetary Space Missions,” Dr. Thomas Röhr deals with heterogeneous robot teams whose individual agents can also join to form more capable agents due to their modular structure. This video highlights an experiment, that shows the feasibility and the potential of the autonomous use of reconfigurable systems for planetary exploration missions. The experiments feature the autonomous execution of an action sequence for multi-robot cooperation for soil sampling and handover of a payload containing the soil sample. [ DFKI ] Thanks, Thomas! Haru has had a busy year! [ Haru Fest ] Thanks, Randy! This is really pretty impressive for remote operation, but it’s hard to tell how much is the capability of the system, and how much is the skill and experience of the operator. 
[ Sanctuary AI ] Cargo drones are designed to carry payloads with predefined shape, size, and/or mass. This lack of flexibility requires a fleet of diverse drones tailored to specific cargo dimensions. Here we propose a new reconfigurable drone based on a modular design that adapts to different cargo shapes, sizes, and mass. [ Paper ] Building tiny giant robots requires lots of little fixtures, and I’m here for it. [ Gundam Factory ] The load-bearing assessment that’s part of this research is particularly cool. [ DFKI ] The Utah Bionic Leg, developed by University of Utah mechanical engineering associate professor Tommaso Lenzi and his team in the HGN Lab, is a motorized prosthetic for lower-limb amputees. The leg uses motors, processors, and advanced artificial intelligence that all work together to give amputees more power to walk, stand-up, sit-down, and ascend and descend stairs and ramps. [ Utah Engineering ] PLEN is all ready for the world cup. [ PLEN ] The Misty platform supports multiple programming languages, including Blockly and Python, making it the perfect programming and robotics learning tool for students of all ages. [ Misty ] Sarcos Technology and Robotics Corporation designs, develops, and manufactures a broad range of advanced mobile robotic systems that redefine human possibilities and are designed to enable the safest most productive workforce in the world. Sarcos robotic systems operate in challenging, unstructured, industrial environments and include teleoperated robotic systems, a powered robotic exoskeleton, and software solutions that enable task autonomy. [ Sarcos ] Teaser for the NCCR Robotics documentary coming in late 2022. [ NCCR Robotics ] A robotic feeding system must be able to acquire a variety of foods. We propose a general bimanual scooping primitive and an adaptive stabilization strategy that enables successful acquisition of a diverse set of food geometries and physical properties. Our approach, CARBS: Coordinated Acquisition with Reactive Bimanual Scooping, learns to stabilize without impeding task progress by identifying high-risk foods and robustly scooping them using closed-loop visual feedback. [ Paper ] Join Jonathan Gammell with our guest speaker Dr. Larry Matthies, NASA JPL, discussing In situ mobility for Planetary Exploration in our third seminar of our Anniversary series. [ ORI ]
  • Avionics market to grow 4.99% to 2030, study predicts - Military Embedded Systems
    Dec 02, 2022 09:21 AM PST
  • Distinguished Professor Philippe Sautet Appointed Levi James Knight, Jr. Chair for Excellence
    Dec 02, 2022 08:51 AM PST
  • Open forums for College of Engineering dean candidates - News - Illinois State
    Dec 02, 2022 07:58 AM PST
  • Global Avionics Market to Hit Sales of $98.41 Billion By 2028 - GlobeNewswire
    Dec 02, 2022 07:54 AM PST
  • Collaboration in Science and Engineering wins research journal award
    Dec 02, 2022 07:40 AM PST
  • Environmental Engineering students host local elementary school students - The Lafayette
    Dec 02, 2022 07:30 AM PST
  • Global Military Avionics Market Research Report 2022-2030 with Focus on USA, India ...
    Dec 02, 2022 07:18 AM PST
  • Peerless Food Equipment to Join Coperion | Food Engineering
    Dec 02, 2022 07:12 AM PST
  • Lockheed Martin and Intel Demonstrate 5G Capabilities For Military Aircraft Use
    Dec 02, 2022 05:43 AM PST
  • 8. Bringing Engineering Pedagogy to Life - ASEE
    Dec 02, 2022 05:24 AM PST
  • In Memoriam: Emeritus Engineering Professor Peter Luh - UConn Today
    Dec 02, 2022 04:36 AM PST
  • Jacobs Awarded $92.5M NASA Architecture, Engineering Services Contract for ... - GovCon Wire
    Dec 02, 2022 03:56 AM PST
  • Boeing to build two new KC-46A aerial refueling aircraft and avionics for Japan in $398.2 ...
    Dec 02, 2022 02:34 AM PST
  • Empire Screen's Decal Sheets Advance Lean Manufacturing Practices - WhatTheyThink
    Dec 02, 2022 01:38 AM PST
  • KMC commanders meet; talk energy conservation > Ramstein Air Base > Article Display
    Dec 02, 2022 01:27 AM PST
  • Swiss engineering group ABB fined $4.3 million | WTVB | 1590 AM · 95.5 FM
    Dec 02, 2022 12:40 AM PST
  • IEEE President’s Note: Looking to 2050 and Beyond
    Dec 01, 2022 11:00 AM PST
    What will the future of the world look like? Everything in the world evolves. Therefore, IEEE also must evolve, not only to survive but to thrive. How will people build communities and engage with one another and with IEEE in the future? How will knowledge be acquired? How will content be curated, shared, and accessed? What issues will influence the development of technical standards? How should IEEE be organized to be most impactful? While no one has a crystal ball, predictions can be made based on evidence and trends. To start the conversation around these questions, I appointed the 2022 IEEE Ad Hoc Committee on IEEE in 2050. The committee chaired by IEEE Fellow Roger Fujii, is designed to envision scenarios looking out to the year 2050 and beyond to gain a global perspective of what the world may look like and what potential futures might mean for IEEE. The committee explored plausible scenarios across IEEE’s range of interests and scanned for drivers of change within existing and emerging technology fields. It also analyzed the role IEEE should take based on the identified potential futures and discussed next steps in IEEE’s major areas of focus, including conferences, education, publications, standards, membership, sustainability, and governance. For example, imagine that in 2050, your “cognitive digital twin” is constantly surfing the large volumes of research papers and data stored across open access repositories to find information directly relevant to your interests. It will also use its imaginative and creative logic to suggest new concepts and solutions for you. This platform will be driven by artificial intelligence, augmented reality, and virtual reality developed to help make you more productive and creative in your career. What is IEEE’s role in this new environment? Molding the IEEE of the future As a global organization, a considerable challenge of IEEE is that it supports a broad community. This also presents a great opportunity to learn and pilot various models, services, products, and solutions to meet the diverse set of members’ needs predicted for 2050 and beyond. The technology generation of 2050 will likely be interested in solving mission-based issues such as climate change, universal access to health care, sustainable food sources, and ubiquitous energy generation and transmission. Thus, IEEE’s mission—to advance technology for humanity—will still be relevant in the future. However, the way IEEE achieves its mission must and will change. The future is multinodular and digital. IEEE will benefit from its status and reputation as a trusted, neutral provider of content and information. As a knowledge provider, IEEE has an opportunity to curate and deliver information to assist its constituents and the public in understanding the benefits and risks associated with several technology areas. Most importantly, IEEE can help with the deep integration of artificial intelligence and virtual reality into a wide variety of everyday applications. Follow me on social media for updates and information from across IEEE Twitter: @IeeePresident Facebook: @ieeepresident Instagram: @ieeepresident LinkedIn: https://www.linkedin.com/showcase/ieeepresident Adapting to an environment of constant chaos and change is essential moving forward. The ebb and flow of geopolitical tensions are likely to continue to increase—which will impact global organizations like ours. 
IEEE must become exceedingly nimble to address rapid changes in technologies and interdisciplinary needs, and attract a broader audience. IEEE will also need to rapidly respond to selected strategic changes and allocate funding for new approaches. IEEE’s governance structure will need to be streamlined to meet the needs of many future scenarios that will require the organization to empower local entities to make decisions within their area. Trust in IEEE must remain high if the organization is to maintain relevance and remain a credible source of information in the future. Now is the time for the organization to be thoughtful and bold, and take risks. IEEE cannot be afraid to break silos. Some activities will need to be terminated to make space for new ones. Products and initiatives should be evaluated regularly, and decisions must be made on a continuous basis. Sound scary? Compounded by global warming, uneven demographic growth, and geopolitical challenges, the future likely is more uncertain than we realize. But often a crisis can be transformed into opportunity if we honestly face the unexpected and become prepared for whatever lies ahead. Adhering to IEEE’s core principles—trust, growth and nurturing, global community building, partnership, service to humanity, and integrity in action—will serve the organization well into the future. I sincerely thank Roger Fujii and the ad hoc committee members for their efforts. Their work will aid IEEE in devising long-term strategies to prepare for the future, to adapt, and to convert uncertainty into opportunity. As I have shared, in an ever-changing and uncertain world, IEEE—your professional home—is always here for you, our members, as well as for humanity, and for our shared future. After all, serving our members well is our raison d’être. By addressing the challenges and opportunities that lie ahead of us, IEEE can remain a vibrant organization with relevance both now and well into 2050. If IEEE remains true to its central values—fostering technological innovation and excellence for the benefit of humanity—I’m certain that the organization’s future will be very bright indeed. It has been my honor and privilege to work with and for you this year as IEEE president and CEO. —K.J. RAY LIU IEEE president and CEO Please share your thoughts with me at president@ieee.org. This article appears in the December 2022 print issue as “Looking to 2050 and Beyond.”
  • The Device That Changed Everything
    Dec 01, 2022 06:47 AM PST
    I was roaming around the IEEE Spectrum office a couple of months ago, looking at the display cases the IEEE History Center has installed in the corridor that runs along the conference rooms at 3 Park. They feature photos of illustrious engineers, plaques for IEEE milestones, and a handful of vintage electronics and memorabilia, including an original Sony Walkman, an Edison Mazda lightbulb, and an RCA Radiotron vacuum tube. And, to my utter surprise and delight, a replica of the first point-contact transistor invented by John Bardeen, Walter Brattain, and William Shockley 75 years ago this month. I dashed over to our photography director, Randi Klett, and startled her with my excitement, which, when she saw my discovery, she understood: We needed a picture of that replica, which she expertly shot and now accompanies this column. What amazed me most besides the fact that the very thing this issue is devoted to was here with us? I’d passed by it countless times and never noticed it, even though it is tens of billions of times the size of an ordinary transistor today. In fact, each of us is surrounded by billions, if not trillions, of transistors, none of which are visible to the naked eye. It is a testament to the imagination and ingenuity of three generations of electronics engineers who took the (by today’s standards) mammoth point-contact transistor and shrunk it down to the point where transistors are so ubiquitous that civilization as we know it would not exist without them. Of course, this wouldn’t be a Spectrum special issue if we didn’t tell you how the original point-contact transistor worked, something that even the inventors seemed a little fuzzy on. According to our editorial director for content development, Glenn Zorpette, the best explanation of the point-contact transistor is in Bardeen’s 1956 Nobel Prize lecture, but even that left out important details, which Zorpette explores in classic Spectrum style in “How the First Transistor Worked” on page 24. And while we’re celebrating this historic accomplishment, Senior Editor Samuel K. Moore, who covers semiconductors for Spectrum and curated this special issue, looks at what the transistor might be like when it turns 100. For “The Transistor of 2047,” Moore talked to the leading lights of semiconductor engineering, many of them IEEE Fellows, to get a glimpse of a future where transistors are stacked on top of each other and are made of increasingly exotic 2D materials, even as the OG of transistor materials, germanium, is poised for a comeback in the near term. When I was talking to Moore a few weeks ago about this issue, he mentioned that he’s attending his favorite conference just as this issue comes out, the 68th edition of IEEE’s Electron Devices Meeting in San Francisco. The mind-bending advances that emerge from that conference always get him excited about the engineering feats occurring in today’s labs and on tomorrow’s production lines. This year he’s most excited about new devices that combine computing capability with memory to speed machine learning. Who knows, maybe the transistor of 2047 will make its debut there, too. This article appears in the December 2022 print issue.
  • Paying Tribute to 1997 IEEE President Charles K. Alexander
    Nov 30, 2022 11:00 AM PST
    Charles K. Alexander, 1997 IEEE president, died on 17 October at the age of 79. The active volunteer held many high-level positions throughout the organization, including 1991–1992 IEEE Region 2 director. He was also the 1993 vice president of the IEEE United States Activities Board (now IEEE-USA). The IEEE Life Fellow worked in academia his entire career. At the time of his death, he was a professor of electrical and computer engineering at Cleveland State University and served as dean of its engineering school. He was a former professor and dean at several schools including Temple University, California State University, Northridge, and Ohio University. He also was a consultant to companies and government agencies, and he was involved in research and development projects in solar energy and software engineering. Alexander was dedicated to making IEEE more meaningful and helpful to engineering students. He helped found the IEEE Student Professional Awareness program, which offers talks and networking events. Alexander also helped found IEEE’s student publication IEEE Potentials. He mentored many students. “My life has been so positively impacted with the significant opportunity to know such a giant in the engineering world,” says Jim Watson, an IEEE senior life member and one of Alexander’s mentees. “While many are very successful engineers and instructors, Dr. Alexander rises far above those who contributed to the success of others.” Helping engineering students succeed Alexander was born in Amherst, Ohio, where he became interested in mechanical engineering at a young age. He fixed the cars and machines used on his family’s farm, according to a 2009 oral history conducted by the IEEE History Center. He switched his interests and then earned a bachelor’s degree in electrical engineering in 1965 from Ohio Normal (now Ohio Northern University), in Ada. As a freshman, he joined the American Institute of Electrical Engineers, one of IEEE’s predecessor societies. While he was an undergraduate, he served as secretary of the school’s AIEE student branch. Alexander went on to receive master’s and doctoral degrees in electrical engineering from Ohio University in Athens, in 1967 and 1971 respectively. As a graduate student, he advised the university’s Eta Kappa Nu chapter, the engineering honor society that is now IEEE’s honor society. He significantly increased meeting attendance, he said in the oral history. Thanks to his efforts, he said, the chapter was ranked one of the top four in the country at the time. After graduating, he joined Ohio University in 1971 as an assistant professor of electrical engineering. During this time, he also worked as a consultant for the U.S. Air Force and Navy, designing manufacturing processes for their various new systems. Alexander also designed a testing system for solid-state filters, which were used in atomic warheads for missiles on aircraft carriers. He left a year later to join Youngstown State University, in Ohio, as an associate professor of electrical engineering. He was faculty advisor for the university’s IEEE student branch and helped increase its membership from 20 students to more than 200, according to the oral history. In 1980 he moved to Tennessee and became a professor of electrical engineering at Tennessee Tech University, in Cookeville. He also helped the school’s IEEE student branch boost its membership. In 1986 he joined Temple University in Philadelphia as a professor and chair of the electrical engineering department. 
At the time, the university did not have an accredited engineering program, he said in the oral history. “They brought me on board to help get the undergraduate programs in all three disciplines accredited,” he said. He also created master’s degree and Ph.D. programs for electrical engineering. He served as acting dean of the university’s college of engineering from 1989 to 1994. After the engineering programs became accredited, Alexander said in the oral history that his job was done there so he left Temple in 1994 to join California State University, Northridge. He was dean of engineering and computer science there. Alexander returned to Ohio University as a visiting professor of electrical engineering and computer science. From 1998 to 2002, he was interim director of the school’s Institute for Corrosion and Multiphase Technology. The institute’s researchers predict and resolve corrosion in oil and gas production and transportation infrastructure. But after a few years, Alexander said, he missed creating and growing engineering programs at universities, so when an opportunity opened up at Cleveland State University in 2007, he took it. As dean of the university’s engineering school, he added 12 faculty positions. Supporting student members’ professional development Throughout his career, Alexander was an active IEEE volunteer. He served as chair of the IEEE Student Activities Committee, where he helped launch programs and services that are still being offered today. They include the IEEE Student Professional Awareness Program and the WriteTalk program (now ProSkills), which helps students develop their communication skills. He was editor of the IEEE Transactions on Education. Along with IEEE Senior Member Jon R. McDearman, he helped launch IEEE Potentials. “Potentials was designed to be something of value for the undergraduates, who don’t want to read technical papers,” Alexander said in the oral history. “We styled it after IEEE Spectrum. Jon and I decided to include articles that would help students on topics like career development and how to be successful.” Alexander continued to rise through the ranks in IEEE and was elected the 1991–1992 Region 2 director. The following year, he became vice president of the IEEE United States Activities Board (now IEEE-USA) and served in that position for two years. He was elevated to IEEE Fellow in 1994 “for leadership in the field of engineering education and the professional development of engineering students.” He was elected as the 1997 IEEE president. “It was an incredible honor,” he said in the oral history. “One of the very special things that has happened to me.” He received the 1984 IEEE Centennial Medal as well as several awards for his work in education, including a 1998 Distinguished Engineering Education Achievement Award and a 1996 Distinguished Engineering Education Leadership Award, both from the Engineering Council, the United Kingdom’s regulatory body for the profession. “Dr. Alexander always emphasized the value of developing professional and ethical skills to enhance engineering career success,” Watson says. “He encouraged others to apply Winston Churchill’s famous quote ‘We make a living by what we get but we make a life by what we give.’” To share your condolences or memories of Alexander, use the commenting form below.
  • Robot Learns Human Trick for Not Falling Over
    Nov 30, 2022 10:56 AM PST
    This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore. Humanoid robots are a lot more capable than they used to be, but for most of them, falling over is still borderline catastrophic. Understandably, the focus has been on getting humanoid robots to succeed at things as opposed to getting robots to tolerate (or recover from) failing at things, but sometimes, failure is inevitable because stuff happens that’s outside your control. Earthquakes, accidentally clumsy grad students, tornadoes, deliberately malicious grad students—the list goes on. When humans lose their balance, the go-to strategy is a highly effective one: Use whatever happens to be nearby to keep from falling over. While for humans this approach is instinctive, it’s a hard problem for robots, involving perception, semantic understanding, motion planning, and careful force control, all executed under aggressive time constraints. In a paper published earlier this year in IEEE Robotics and Automation Letters, researchers at Inria in France show some early work getting a TALOS humanoid robot to use a nearby wall to successfully keep itself from taking a tumble. The tricky thing about this technique is how little time a robot has to understand that it’s going to fall, sense its surroundings, make a plan to save itself, and execute that plan in time to avoid falling. In this paper, the researchers address most of these things—the biggest caveat is probably that they’re assuming that the location of the nearby wall is known, but that’s a relatively straightforward problem to solve if your robot has the right sensors on it. Once the robot detects that something in its leg has given out, its Damage Reflex (D-Reflex) kicks in. D-Reflex is based around a neural network that was trained in simulation (taking a mere 882,000 simulated trials), and with the posture of the robot and the location of the wall as inputs, the network outputs how likely a potential wall contact is to stabilize the robot, taking just a few milliseconds. The system doesn’t actually need to know anything specific about the robot’s injury, and will work whether the actuator is locked up, moving freely but not controllably, or completely absent—the “amputation” case. Of course, reality rarely matches simulation, and it turns out that a damaged and tipping-over robot doesn’t reliably make contact with the the wall exactly where it should, so the researchers had to tweak things to make sure that the robot stops its hand as soon as it touches the wall whether it’s in the right spot or not. This method worked pretty well—using D-Reflex, the TALOS robot was able to avoid falling in three out of four trials where it would otherwise have fallen. Considering how expensive robots like TALOS are, this is a pretty great result, if you ask me. The obvious question at this point is, “Okay, now what?” Well, that’s beyond the scope of this research, but generally “now what” consists of one of two things. Either the robot falls anyway, which can definitely happen even with this method because some configurations of robot and wall are simply not avoidable, or the robot doesn’t fall and you end up with a slightly busted robot leaning precariously against a wall. In either case, though, there are options. We’ve seen a bunch of complementary work on surviving falls with humanoid robots in one way or another. 
And in fact one of the authors of this paper, Jean-Baptiste Mouret, has already published some very cool research on injury adaptation for legged robots. In the future, the idea is to extend this idea to robots that are moving dynamically, which is definitely going to be a lot more challenging, but potentially a lot more useful. First, do not fall: learning to exploit a wall with a damaged humanoid robot, by Timothee Anne, Eloïse Dalin, Ivan Bergonzani, Serena Ivaldi, and Jean-Baptiste Mouret from Inria, is published in IEEE Robotics and Automation Letters.
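The article describes D-Reflex only at a high level: a network trained in simulation that takes the robot’s posture and the wall location and scores how likely a candidate contact is to stop the fall. As a purely illustrative sketch of that pattern (not the authors’ code), the snippet below scores a set of hypothetical contact points with a small PyTorch model; the input dimensions, layer sizes, and placeholder tensors are all assumptions.

```python
# Illustrative sketch only: a minimal stand-in for the kind of model the
# D-Reflex paper describes (posture + wall location in, "will this contact
# stabilize the robot?" out). Sizes and data here are hypothetical.
import torch
import torch.nn as nn

class ContactQualityNet(nn.Module):
    def __init__(self, posture_dim=32, wall_dim=3, contact_dim=3):
        super().__init__()
        # Inputs: joint posture, wall pose, and a candidate hand-contact point.
        self.net = nn.Sequential(
            nn.Linear(posture_dim + wall_dim + contact_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
            nn.Sigmoid(),  # probability that this contact prevents a fall
        )

    def forward(self, posture, wall, contact):
        return self.net(torch.cat([posture, wall, contact], dim=-1))

# At reflex time: score many candidate contact points with the
# simulation-trained weights and command the hand toward the best one.
model = ContactQualityNet()
posture = torch.zeros(1, 32)            # placeholder joint angles
wall = torch.tensor([[1.0, 0.0, 0.4]])  # placeholder wall pose (assumed known)
candidates = torch.rand(64, 3)          # hypothetical sampled wall-contact points
scores = model(posture.expand(64, -1), wall.expand(64, -1), candidates)
best = candidates[scores.argmax()]
print("chosen contact point:", best.tolist())
```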
  • John Bardeen’s Terrific Transistorized Music Box
    Nov 30, 2022 08:00 AM PST
    On 16 December 1947, after months of work and refinement, the Bell Labs physicists John Bardeen and Walter Brattain completed their critical experiment proving the effectiveness of the point-contact transistor. Six months later, Bell Labs gave a demonstration to officials from the U.S. military, who chose not to classify the technology because of its potentially broad applications. The following week, news of the transistor was released to the press. The New York Herald Tribune predicted that it would cause a revolution in the electronics industry. It did. How John Bardeen got his music box This article is part of our special report on the 75th anniversary of the invention of the transistor. In 1949 an engineer at Bell Labs built three music boxes to show off the new transistors. Each Transistor Oscillator-Amplifier Box contained an oscillator-amplifier circuit and two point-contact transistors powered by a B-type battery. It electronically produced five distinct tones, although the sounds were not exactly melodious delights to the ear. The box’s design was a simple LC circuit, consisting of a capacitor and an inductor. The capacitance was selectable using the switch bank, which Bardeen “played” when he demonstrated the box. John Bardeen, co-inventor of the point-contact transistor, liked to play the tune “How Dry I Am” on his music box. The Spurlock Museum/University of Illinois at Urbana-Champaign Bell Labs used one of the boxes to demonstrate the transistor’s portability. In early demonstrations, the instantaneous response of the circuits wowed witnesses, who were accustomed to having to wait for vacuum tubes to warm up. The other two music boxes went to Bardeen and Brattain. Only Bardeen’s survives. Bardeen brought his box to the University of Illinois at Urbana-Champaign, when he joined the faculty in 1951. Despite his groundbreaking work at Bell Labs, he was relieved to move. Shortly after the invention of the transistor, Bardeen’s work environment began to deteriorate. William Shockley, Bardeen’s notoriously difficult boss, prevented him from further involvement in transistors, and Bell Labs refused to allow Bardeen to set up another research group that focused on theory. Frederick Seitz recruited Bardeen to Illinois with a joint appointment in electrical engineering and physics, and he spent the rest of his career there. Although Bardeen earned a reputation as an unexceptional instructor—an opinion his student Nick Holonyak Jr. would argue was unwarranted—he often got a laugh from students when he used the music box to play the Prohibition-era song “How Dry I Am.” He had a key to the sequence of notes taped to the top of the box. In 1956, Bardeen, Brattain, and Shockley shared the Nobel Prize in Physics for their “research on semiconductors and their discovery of the transistor effect.” That same year, Bardeen collaborated with postdoc Leon Cooper and grad student J. Robert Schrieffer on the work that led to their April 1957 publication in Physical Review of “Microscopic Theory of Superconductivity.” The trio won a Nobel Prize in 1972 for the development of the BCS model of superconductivity (named after their initials). Bardeen was the first person to win two Nobels in the same field and remains the only double laureate in physics. He died in 1991. 
Overcoming the “inherent vice” of Bardeen’s music box Curators at the Smithsonian Institution expressed interest in the box, but Bardeen instead offered it on a long-term loan to the World Heritage Museum (predecessor to the Spurlock Museum) at the University of Illinois. That way he could still occasionally borrow it for use in a demonstration. In general, though, museums frown upon allowing donors—or really anyone—to operate objects in their collections. It’s a sensible policy. After all, the purpose of preserving objects in a museum is so that future generations have access to them, and any additional use can cause deterioration or damage. (Rest assured, once the music box became part of the accessioned collections after Bardeen’s death, few people were allowed to handle it other than for approved research.) But musical instruments, and by extension music boxes, are functional objects: Much of their value comes from the sound they produce. So curators have to strike a balance between use and preservation. As it happens, Bardeen’s music box worked up until the 1990s. That’s when “inherent vice” set in. In the lexicon of museum practice, inherent vice refers to the natural tendency for certain materials to decay despite preservation specialists’ best attempts to store the items at the ideal temperature, humidity, and light levels. Nitrate film, highly acidic paper, and natural rubber are classic examples. Some objects decay quickly because the mixture of materials in them creates unstable chemical reactions. Inherent vice is a headache for any curator trying to keep electronics in working order. The museum asked John Dallesasse, a professor of electrical engineering at Illinois, to take a look at the box, hoping that it just needed a new battery. Dallesasse’s mentor at Illinois was Holonyak, whose mentor was Bardeen. So Dallesasse considered himself Bardeen’s academic grandson. It soon became clear that one of the original point-contact transistors had failed, and several of the wax capacitors had degraded, Dallesasse told me recently. But returning the music box to operable status was not as simple as replacing those parts. Most professional conservators abide by a code of ethics that limits their intervention; they make only changes that can be easily reversed. In 2019, University of Illinois professor John Dallesasse carefully restored Bardeen’s music box. The Spurlock Museum/University of Illinois at Urbana-Champaign The museum was lucky in one respect: The point-contact transistor had failed as an open circuit instead of a short. This allowed Dallesasse to jumper in replacement parts, running wires from the music box to an external breadboard to bypass the failed components, instead of undoing any of the original soldering. He made sure to use time-period appropriate parts, including a working point-contact transistor borrowed from John’s son Bill Bardeen, even though that technology had been superseded by bipolar junction transistors. Despite Dallesasse’s best efforts, the rewired box emitted a slight hum at about 30 kilohertz that wasn’t present in the original. He concluded that it was likely due to the extra wiring. He adjusted some of the capacitor values to tune the tones closer to the box’s original sounds. Dallesasse and others recalled that the first tone had been lower. Unfortunately, the frequency could not be reduced any further because it was at the edge of performance for the oscillator.
“Restoring the Bardeen Music Box” www.youtube.com From a preservation perspective, one of the most important things Dallesasse did was to document the restoration process. Bardeen had received the box as a gift without any documentation from the original designer, so Dallesasse mapped out the circuit, which helped him with the troubleshooting. Also, documentary filmmaker Amy Young and multimedia producer Jack Brighton recorded a short video of Dallesasse explaining his approach and technique. Now future historians have resources about the second life of the music box, and we can all hear a transistor-generated rendition of “How Dry I Am.” Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the December 2022 print issue as “John Bardeen’s Marvelous Music Box.”
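The tones of Bardeen’s box come from the LC oscillator described above, with the switch bank selecting the capacitance. The standard resonance formula shows why swapping in a different capacitor changes the pitch; the component values in the example are hypothetical, since the article does not give the originals.

```latex
% Resonant frequency of an LC oscillator; the L and C values below are
% hypothetical, since the article does not give the box's original parts.
f_0 = \frac{1}{2\pi\sqrt{LC}}
% Example: L = 100 mH and C = 100 nF give f_0 = 1/(2*pi*sqrt(1e-8)) ~ 1.6 kHz.
% Switching in twice the capacitance lowers the pitch by a factor of sqrt(2),
% to roughly 1.1 kHz, which is how a switch bank can select distinct tones.
```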
  • The Future of the Transistor Is Our Future
    Nov 29, 2022 09:45 AM PST
    This is a guest post in recognition of the 75th anniversary of the invention of the transistor. It is adapted from an essay in the July 2022 IEEE Electron Device Society Newsletter. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE. On the 75th anniversary of the invention of the transistor, a device to which I have devoted my entire career, I’d like to answer two questions: Does the world need better transistors? And if so, what will they be like? This article is part of our special report on the 75th anniversary of the invention of the transistor. I would argue, that yes, we are going to need new transistors, and I think we have some hints today of what they will be like. Whether we’ll have the will and economic ability to make them is the question. I believe the transistor is and will remain key to grappling with the impacts of global warming. With its potential for societal, economic, and personal upheaval, climate change calls for tools that give us humans orders-of-magnitude more capability. Semiconductors can raise the abilities of humanity like no other technology. Almost by definition, all technologies increase human abilities. But for most of them, natural resources and energy constraints make orders-of-magnitude improvements questionable. Transistor-enabled technology is a unique exception for the following reasons. As transistors improve, they enable new abilities such as computing and high-speed communication, the Internet, smartphones, memory and storage, robotics, artificial intelligence, and other things no one has thought of yet. These abilities have wide applications, and they transform all technologies, industries, and sciences. a. Semiconductor technology is not nearly as limited in growth by its material and energy usages as other technologies. ICs use relatively small amounts of material. And the less material they use, by being made even smaller, the faster, more energy efficient, and capable they become. Theoretically, the energy required for information processing can still be reduced to less than one-thousandth of what is required today. Although we do not yet know exactly how to approach such theoretical efficiency, we know that increasing energy efficiency a thousandfold would not violate physical laws. In contrast, the energy efficiencies of most other technologies, such as motors and lighting, are already at 30 to 80 percent of their theoretical limits. Transistors: past, present, and future How we’ll continue to improve transistor technology is relatively clear in the short term, but it gets murkier the farther out you go from today. In the near term, you can glimpse the transistor’s future by looking at its recent past. The basic planar (2D) MOSFET structure remained unchanged from 1960 until around 2010, when it became impossible to further increase transistor density and decrease the device’s power consumption. My lab at the University of California, Berkeley, saw that point coming more than a decade earlier. We reported the invention of the FinFET, the planar transistor’s successor, in 1999. FinFET, the first 3D MOSFET, changed the flat and wide transistor structure to a tall and narrow one. The benefit is better performance in a smaller footprint, much like the benefit of multistory buildings over single-story ones in a crowded city. The FinFET is also what’s called a thin-body MOSFET, a concept that continues to guide the development of new devices. 
It arose from the insight that current will not leak through a transistor within several nanometers of the silicon surface because the surface potential there is well controlled by the gate voltage. FinFETs take this thin-body concept to heart. The device’s body is the vertical silicon fin, which is covered by oxide insulator and gate metal, leaving no silicon outside the range of strong gate control. FinFETs reduced leakage current by orders of magnitude and lowered transistor operating voltage. It also pointed toward the path for further improvement: reducing the body thickness even more. The fin of the FinFET has become thinner and taller with each new technology node. But this progress has now become too difficult to maintain. So industry is adopting a new 3D thin-body CMOS structure, called gate-all-around (GAA). Here, a stack of ribbons of semiconductor make up the thin body. Each evolution of the MOSFET structure has been aimed at producing better control over charge in the silicon by the gate [pink]. Dielectric [yellow] prevents charge from moving from the gate into the silicon body [blue]. The 3D thin-body trend will continue from these 3D transistors to 3D-stacked transistors, 3D monolithic circuits, and multichip packaging. In some cases, this 3D trend has already reached great heights. For instance, the regularity of the charge-trap memory-transistor array allowed NAND flash memory to be the first IC to transition from 2D circuits to 3D circuits. Since the first report of 3D NAND by Toshiba in 2007, the number of stacked layers has grown from 4 to more than 200. Monolithic 3D logic ICs will likely start modestly, with stacking the two transistors of a CMOS inverter to reduce all logic gates’ footprints [see “3D-Stacked CMOS Takes Moore’s Law to New Heights”]. But the number of stacks may grow. Other paths to 3D ICs may employ the transfer or deposition of additional layers of semiconductor films, such as silicon, silicon germanium, or indium gallium arsenide onto a silicon wafer. The thin-body trend might meet its ultimate endpoint in 2D semiconductors, whose thickness is measured in atoms. Molybdenum disulfide molecules, for example, are both naturally thin and relatively large, forming a 2D semiconductor that may be no more than three atoms wide yet have very good semiconductor properties. In 2016, engineers in California and Texas used a film of the 2D-semiconductor molecule molybdenum disulfide and a carbon nanotube to demonstrate a MOSFET with a critical dimension: a gate length just 1 nanometer across. Even with a gate as short as 1 nm, the transistor leakage current was only 10 nanoamperes per millimeter, comparable with today’s best production transistor. “The progress of transistor technology has not been even or smooth.” One can imagine that in the distant future, the entire transistor may be prefabricated as a single molecule. These prefabricated building blocks might be brought to their precise locations in an IC through a process called directed-self-assembly (DSA). To understand DSA, it may be helpful to recall that a COVID virus uses its spikes to find and chemically dock itself onto an exact spot at the surface of particular human cells. In DSA, the docking spots, the “spikes,” and the transistor cargo are all carefully designed and manufactured. The initial docking spots may be created with lithography on a substrate, but additional docking spots may be brought in as cargo in subsequent steps. 
Some of the cargo may be removed by heat or other means if they are needed only during the fabrication process but not in the final product. Besides making transistors smaller, we’ll have to keep reducing their power consumption. Here we could see an order-of-magnitude reduction through the use of what are called negative-capacitance field-effect transistors (NCFET). These require the insertion of a nanometer-thin layer of ferroelectric material, such as hafnium zirconium oxide, in the MOSFET’s gate stack. Because the ferroelectric contains its own internal electric field, it takes less energy to switch the device on or off. An additional advantage of the thin ferroelectric is the possible use of the ferroelectric’s capacity to store a bit as the state of its electric field, thereby integrating memory and computing in the same device. The author [left] received the U.S. National Medal of Technology and Innovation from President Barack Obama [right] in 2016. Kevin Dietsch/UPI/Alamy To some degree, the devices I’ve described arose out of existing trends. But future transistors may have very different materials, structures, and operating mechanisms from those of today’s transistor. For example, the nanoelectromechanical switch is a return to the mechanical relays of decades past rather than an extension of the transistor. Rather than relying on the physics of semiconductors, it uses only metals, dielectrics, and the force between closely spaced conductors with different voltages applied to them. All these examples have been demonstrated with experiments years ago. However, bringing them to production will require much more time and effort than previous breakthroughs in semiconductor technology. Getting to the future Will we be able to achieve these feats? Some lessons from the past indicate that we could. The first lesson is that the progress of transistor technology has not been even or smooth. Around 1980, the rising power consumption per chip reached a painful level. The adoption of CMOS, replacing NMOS and bipolar technologies—and later, the gradual reduction of operation voltage from 5 volts to 1—gave the industry 30 years of more or less straightforward progress. But again, power became an issue. Between 2000 and 2010, the heat generated per square centimeter of IC was projected by thoughtful researchers to soon reach that of a nuclear-reactor core. The adoption of 3D thin-body FinFET and multicore processor architectures averted the crisis and ushered in another period of relatively smooth progress. The history of transistor technology may be described as climbing one mountain after another. Only when we got to the top of one were we able see the vista beyond and map a route to climb the next taller and steeper mountain. The second lesson is that the core strength of the semiconductor industry—nanofabrication—is formidable. History proves that, given sufficient time and economic incentives, the industry has been able to turn any idea into reality, as long as that idea does not violate scientific laws. But will the industry have sufficient time and economic incentives to continue climbing taller and steeper mountains and keep raising humanity’s abilities? It’s a fair question. Even as the fab industry’s resources grow, the mountains of technology development grow even faster. A time may come when no one fab company can reach the top of the mountain to see the path ahead. What happens then? 
The revenue of all semiconductor fabs (both independent and those, like Intel, that are integrated companies) is about one-third of the semiconductor industry revenue. But fabs make up just 2 percent of the combined revenues of the IT, telecommunications, and consumer-electronics industries that semiconductor technology enables. Yet the fab industry bears most of the growing burden of discovering, producing, and marketing new transistors and nanofabrication technologies. That needs to change. For the industry to survive, the relatively meager resources of the fab industry must be prioritized in favor of fab building and shareholder needs over scientific exploration. While the fab industry is lengthening its research time horizon, it needs others to take on the burden too. Humanity’s long-term problem-solving abilities deserve targeted public support. The industry needs the help of very-long-term exploratory research, publicly funded, in a Bell Labs–like setting or by university researchers with career-long timelines and wider and deeper knowledge in physics, chemistry, biology, and algorithms than corporate research currently allows. This way, humanity will continue to find new transistors and gain the abilities it will need to face the challenges in the centuries ahead.
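The essay credits the switch to CMOS and the drop in operating voltage from 5 V to 1 V for decades of manageable power growth. The textbook dynamic-power relation (not a formula from the essay itself) makes the voltage arithmetic explicit:

```latex
% Dynamic switching power of a CMOS node (textbook relation, not from the essay):
P_{\mathrm{dyn}} = \alpha \, C_{\mathrm{load}} \, V_{DD}^{2} \, f
% Switching energy scales with V_DD^2, so reducing V_DD from 5 V to 1 V cuts the
% energy per transition by (5/1)^2 = 25x for the same switched capacitance.
```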
  • Waiting for Superbatteries
    Nov 29, 2022 08:00 AM PST
    If grain must be dragged to market on an oxcart, how far can it go before the oxen eat up all the cargo? This, in brief, is the problem faced by any transportation system in which the vehicle must carry its own fuel. The key value is the density of energy, expressed with respect to either mass or volume. The era of large steam-powered ocean liners began during the latter half of the 19th century, when wood was still the world’s dominant fuel. But no liners fired their boilers with wood: There would have been too little space left for passengers and cargo. Soft wood, such as spruce or pine, packs less than 10 megajoules per liter, whereas bituminous coal has 2.5 times as much energy by volume and at least twice as much by mass. By comparison, gasoline has 34 MJ/L and diesel about 38 MJ/L. But in a world that aspires to leave behind all fuels (except hydrogen or maybe ammonia) and to electrify everything, the preferred measure of stored energy density is watt-hours per liter. By this metric, air-dried wood contains about 3,500 Wh/L, good steam coal around 6,500, gasoline 9,600, aviation kerosene 10,300, and natural gas (methane) merely 9.7—less than 1/1,000 the density of kerosene. How do batteries compare with the fuels they are to displace? The first practical battery, Gaston Planté’s lead-acid cell introduced in 1859, has gradually improved from less than 60 Wh/L to about 90 Wh/L. The nickel-cadmium battery, invented by Waldemar Jungner in 1899, now frequently stores more than 150 Wh/L, and today’s best mass-manufactured performers are lithium-ion batteries, the first commercial versions of which came out in 1991. The best energy density now commercially available in very large quantities for lithium-ion batteries is at 750 Wh/L, which is widely seen in electric cars. In 2020 Panasonic promised it would reach about 850 Wh/L by 2025 (and do so without the expensive cobalt). Eventually, the company aims to reach a 1,000-Wh/L product. Over the past 50 years, the highest energy density of mass-produced batteries has roughly quintupled Claims of new energy-density records for lithium-ion batteries appear regularly. In March 2021, Sion Power announced an 810-Wh/L pouch cell; three months later NanoGraf announced a cylindrical cell with 800 Wh/L. Earlier claims spoke of even loftier energy densities—QuantumScape mentioned a 1,000-Wh/L cell in a December 2020 claim, and Sion Power of a 1,400-Wh/L cell as far back as 2018. But Sion’s cells came from a pilot production line, not from a routine mass-scale operation, and QuantumScape’s claim was based on laboratory tests of single-layer cells, not on any commercially available multilayer products. The real-world leader seems to be Amprius Technologies of Fremont, Calif.: In February 2022, the company announced the first delivery of batteries rated as high as 1,150 Wh/L, to a maker of a new generation of high-altitude uncrewed aircraft, to be used to relay signals. This is obviously a niche market, orders of magnitude smaller than the potential market for electric vehicles, but it is a welcome confirmation of continuous density gains. There is a long way to go before batteries rival the energy density of liquid fuels. Over the past 50 years, the highest energy density of mass-produced batteries has roughly quintupled, from less than 150 to more than 700 Wh/L. But even if that trend continues for the next 50 years, we would still see top densities of about 3,500 Wh/L, no more than a third that of kerosene. 
The wait for superbatteries ready to power intercontinental flight may not be over by even 2070. This article appears in the December 2022 print issue.
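The unit arithmetic behind these comparisons is easy to verify. The short script below uses only the article’s rounded figures plus the exact relation 1 Wh = 3,600 J; no new data is introduced.

```python
# Back-of-envelope check of the energy-density arithmetic in the piece.
# The unit relation is exact (1 Wh = 3,600 J); the fuel and battery numbers
# are the article's rounded values.

def mj_per_l_to_wh_per_l(mj_per_l: float) -> float:
    """Convert volumetric energy density from MJ/L to Wh/L (1 Wh = 3,600 J)."""
    return mj_per_l * 1e6 / 3600.0

print(round(mj_per_l_to_wh_per_l(34)))  # gasoline: ~9,400 Wh/L (article quotes ~9,600 with different rounding)
print(round(mj_per_l_to_wh_per_l(38)))  # diesel: ~10,600 Wh/L

# The 50-year extrapolation: batteries went from <150 to >700 Wh/L (roughly 5x);
# another 5x over the next 50 years gives ~3,500 Wh/L, still only about a third
# of aviation kerosene's ~10,300 Wh/L.
today_wh_per_l = 700
projected = today_wh_per_l * 5
print(projected, round(projected / 10300, 2))  # 3500, ~0.34
```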
  • The Transistor at 75
    Nov 29, 2022 05:00 AM PST
    Seventy-five years is a long time. It’s so long that most of us don’t remember a time before the transistor, and long enough for many engineers to have devoted entire careers to its use and development. In honor of this most important of technological achievements, this issue’s package of articles explores the transistor’s historical journey and potential future. This article is part of our special report on the 75th anniversary of the invention of the transistor. In “The First Transistor and How it Worked,” Glenn Zorpette dives deep into how the point-contact transistor came to be. Then, in “The Ultimate Transistor Timeline,” Stephen Cass lays out the device’s evolution, from the flurry of successors to the point-contact transistor to the complex devices in today’s laboratories that might one day go commercial. The transistor would never have become so useful and so ubiquitous if the semiconductor industry had not succeeded in making it small and cheap. We try to give you a sense of that scale in “The State of the Transistor.” So what’s next in transistor technology? In less than 10 years’ time, transistors could take to the third dimension, stacked atop each other, write Marko Radosavljevic and Jack Kavalieros in “Taking Moore’s Law to New Heights.” And we asked experts what the transistor will be like on the 100th anniversary of its invention in “The Transistor of 2047.” Meanwhile, IEEE’s celebration of the transistor’s 75th anniversary continues. The Electron Devices Society has been at it all year, writes Joanna Goodrich in The Institute, and has events planned into 2023 that you can get involved in. So go out and celebrate the device that made the modern world possible.
  • The EV Transition Explained: Can the Grid Cope?
    Nov 28, 2022 01:18 PM PST
    There have been vigorous debates pro and con in the United States and elsewhere over whether electric grids can support EVs at scale. The answer is a nuanced "perhaps." It depends on several factors, including the speed of grid-component modernization, the volume of EV sales, where they occur and when, what kinds of EV charging are being done and when, regulatory and political decisions, and critically, economics. The city of Palo Alto, Calif., is a microcosm of many of the issues involved. Palo Alto boasts the highest adoption rate of EVs in the United States: In 2020, one in six of the town’s 25,000 households owned an EV. Of the 52,000 registered vehicles in the city, 4,500 are EVs, and on workdays commuters drive another 3,000 to 5,000 EVs into the city. Residents can access about 1,000 charging ports spread over 277 public charging stations, with another 3,500 or so charging ports located at residences. Palo Alto’s government has set a very aggressive Sustainability and Climate Action Plan with a goal of reducing its greenhouse gas emissions to 80 percent below the 1990 level by the year 2030. In comparison, the state’s goal is to achieve the same reduction by 2050. To realize this reduction, 80 percent of the vehicles registered in (and commuting into) the city must be EVs within the next eight years, around 100,000 vehicles in all. The number of charging ports will need to grow to an estimated 6,000 to 12,000 public ports (some 300 being DC fast chargers) and 18,000 to 26,000 residential ports, with most of those being L2-type charging ports. To meet Palo Alto’s 2030 emission-reduction goals, the city, which owns and operates the electric utility, would like to significantly increase the amount of local renewable energy used for electricity generation (think rooftop solar), including the ability to use EVs as distributed-energy resources via vehicle-to-grid (V2G) connections. The city has provided incentives for the purchase of both EVs and charging ports, the installation of heat-pump water heaters, and the installation of solar and battery-storage systems. The EV Transition Explained: This is the third in a series of articles exploring the major technological and social challenges that must be addressed as we move from vehicles with internal-combustion engines to electric vehicles at scale. In reviewing each article, readers should bear in mind Nobel Prize–winning physicist Richard Feynman’s admonition: "For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled." There are, however, a few potholes that need to be filled to meet the city’s 2030 emission objectives. At a February meeting of Palo Alto’s Utilities Advisory Commission, Tomm Marshall, assistant director of utilities, stated, "There are places even today [in the city] where we can’t even take one more heat pump without having to rebuild the portion of the [electrical distribution] system. Or we can’t even have one EV charger go in." Peak loading is the primary concern. Palo Alto’s electrical-distribution system was built for the electric loads of the 1950s and 1960s, when household heating, water heating, and cooking ran mainly on natural gas. The distribution system does not have the capacity to support EVs and all-electric appliances at scale, Marshall suggested.
Further, the system was designed for one-way power, not for distributed-renewable-energy devices sending power back into the system. A big problem is the 3,150 distribution transformers in the city, Marshall indicated. A 2020 electrification-impact study found that without improvements, more than 95 percent of residential transformers would be overloaded if Palo Alto hits its EV and electrical-appliance targets by 2030. For instance, Marshall stated, it is not unusual for a 37.5 kilovolt-ampere transformer to support 15 households, as the distribution system was originally designed for each household to draw 2 kilowatts of power. Converting a gas appliance to a heat pump, for example, would draw 4 to 6 kW, while an L2 charger for EVs would be 12 to 14 kW. A cluster of uncoordinated L2 chargers could create an excessive peak load that would overload or blow out a transformer, especially one nearing the end of its life, as many already are. Without smart meters—that is, Advanced Metering Infrastructure (AMI), which will be introduced into Palo Alto in 2024—the utility has little to no insight into household peak loads. Palo Alto’s electrical-distribution system needs a complete upgrade to allow the utility to balance peak loads, manage two-way power flows, install the requisite number of EV charging ports and electric appliances to support the city’s emission-reduction goals, and deliver power in a safe, reliable, sustainable, and cybersecure manner. The system also must be able to cope with a multihour outage, after which electric appliances and EV charging will commence all at once when power is restored, placing a heavy peak load on the distribution system. A map of EV charging stations in the Palo Alto, Calif., area is available from PlugShare.com. Palo Alto is considering investing US $150 million toward modernizing its distribution system, but that will take two to three years of planning, plus another three to four years or more to perform all the necessary work, and only if the utility can get the engineering and management staff, which continues to be in short supply there and at other utilities across the country. Further, like other industries, the energy business has become digitized, meaning the skills needed are different from those previously required. Until it can modernize its distribution network, Marshall conceded, the utility must continue to deal with angry and confused customers who are being encouraged by the city to invest in EVs, charging ports, and electric appliances, only to be told that they may not be accommodated anytime soon. Policy runs up against engineering reality: The situation in Palo Alto is not unique. There are some 465 cities in the United States with populations between 50,000 and 100,000 residents, and another 315 that are larger, many facing similar challenges. How many can really support a rapid influx of thousands of new EVs? Phoenix, for example, wants 280,000 EVs plying its streets by 2030, nearly seven times as many as it has currently. Similar mismatches between climate-policy desires and an energy infrastructure incapable of supporting those policies will play out not only in the United States but also elsewhere in one form or another over the next two decades as the conversion to EVs and electric appliances moves to scale.
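A rough way to see why clustered L2 charging worries the utility is to tally coincident loads against a transformer’s rating, using only the figures quoted above (a 37.5-kVA transformer serving 15 households designed around 2 kW each, 4-to-6-kW heat pumps, 12-to-14-kW L2 chargers). The sketch below is a back-of-the-envelope illustration, not a utility planning tool; it assumes loads coincide fully and treats kW and kVA as interchangeable (that is, a power factor near 1), both simplifications.

```python
# Illustrative only: crude coincident-peak estimate for a neighborhood transformer,
# using the figures cited in the article. Real planning uses load-diversity factors.

TRANSFORMER_KVA = 37.5
HOUSEHOLDS = 15
BASELINE_KW = 2.0      # per-household design load the system was built for
HEAT_PUMP_KW = 5.0     # midpoint of the 4-6 kW range quoted
L2_CHARGER_KW = 13.0   # midpoint of the 12-14 kW range quoted

def peak_load_kw(n_heat_pumps: int, n_l2_chargers: int) -> float:
    """Sum of baseline, heat-pump, and L2-charger loads, assumed fully coincident."""
    return (HOUSEHOLDS * BASELINE_KW
            + n_heat_pumps * HEAT_PUMP_KW
            + n_l2_chargers * L2_CHARGER_KW)

for chargers in range(5):
    load = peak_load_kw(n_heat_pumps=1, n_l2_chargers=chargers)
    status = "OVERLOADED" if load > TRANSFORMER_KVA else "ok"
    print(f"{chargers} L2 chargers -> {load:.1f} kW against a {TRANSFORMER_KVA} kVA rating ({status})")
```

Even under these crude assumptions, a single uncoordinated L2 charger pushes the estimate well past the nameplate rating, which is consistent with Marshall’s warning about adding "one more heat pump" or "one EV charger."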
As in Palo Alto, it will likely be blown transformers or constantly flickering lights that signal there is an EV charging-load issue. Professor Deepak Divan, the director of the Center for Distributed Energy at Georgia Tech, says his team found that in residential areas "multiple L2 chargers on one distribution transformer can reduce its life from an expected 30 to 40 years to 3 years." Given that most of the millions of U.S. transformers are approaching the end of their useful lives, replacing transformers soon could be a major and costly headache for utilities, assuming they can get them. Supplies of distribution transformers are low, and costs have skyrocketed from a range of $3,000 to $4,000 each to as much as $20,000. Supporting EVs may require larger, heavier transformers, which means many of the 180 million power poles on which they sit will need to be replaced to support the additional weight. Exacerbating the transformer loading problem, Divan says, is that many utilities "have no visibility beyond the substation" into how and when power is being consumed. His team surveyed "twenty-nine utilities for detailed voltage data from their AMI systems, and no one had it." The situation is not universal, however. Xcel Energy in Minnesota, for example, has already started to upgrade distribution transformers because of potential residential EV electrical-load issues. Xcel president Chris Clark told the Minneapolis Star Tribune that four or five families buying EVs noticeably affects the transformer load in a neighborhood, with a family buying an EV "adding another half of their house." Joyce Bodoh, director of energy solutions and clean energy for the Rappahannock Electric Cooperative (REC), a distribution utility in central Virginia, says that "REC leadership is really, really supportive of electrification, energy efficiency, and electric transportation." However, she adds, "all those things are not a magic wand. You can’t make all three things happen at the same time without a lot of forward thinking and planning." Total U.S. Energy Consumption: For nearly 50 years, Lawrence Livermore National Laboratory has been publishing a Sankey diagram of estimated U.S. energy consumption from various generation sources. In 2021, the United States consumed 97.3 quadrillion British thermal units (quads) of energy, with the transportation sector using 26.9 quads, 90 percent of it from petroleum. As the transportation sector electrifies, electricity generation will need to grow, though by less than the amount of energy that petroleum once provided to the transportation sector, given the higher energy efficiency of EVs. To achieve the desired reduction in greenhouse gases, renewable-energy generation of electricity will need to replace fossil fuels. The improvements and replacements to the grid’s 8,000 power-generation units, 600,000 circuit miles of AC transmission lines (240,000 circuit miles being high-voltage lines), and 70,000 substations needed to support increased renewable energy and battery storage are estimated to total more than $2.5 trillion in capital, operations, and maintenance costs by 2035. In the short term, it is unlikely that EVs will create power shortfalls in the U.S. grid, but the rising number of EVs will test the local grid’s reliability at many of the 3,000 electric-distribution utilities in the United States, which themselves own more than 5.5 million miles of power lines. It is estimated that these utilities need $1 trillion in upgrades by 2035.
As part of this planning effort, Bodoh says that REC has actively been performing "an engineering study that looked at line loss across our systems as well as our transformers, and said, 'If this transformer got one L2 charger, what would happen? If it got two L2s, what would happen, and so on?'" She adds that REC "is trying to do its due diligence, so we don’t get surprised when a cul-de-sac gets a bunch of L2 chargers and there’s a power outage." REC also has hourly energy-use data from which it can find where L2 chargers may be in use because of the load profile of EV charging. However, Bodoh says, REC does not just want to know where the L2 chargers are, but also to encourage its EV-owning customers to charge at nonpeak hours—that is, 9 p.m. to 5 a.m. and 10 a.m. to 2 p.m. REC has recently set up an EV charging pilot program for 200 EV owners that provides a $7 monthly credit if they charge during off-peak hours. Whether REC or other utilities can convince enough EV owners with L2 chargers to consistently charge during off-peak hours remains to be seen. Even if EV owner behavior changes, off-peak charging may not fully solve the peak-load problem once EV ownership really ramps up. "Transformers are passively cooled devices," specifically designed to be cooled at night, says Divan. "When you change the (power) consumption profile by adding several EVs using L2 chargers at night, that transformer is running hot." The risk of transformer failure from uncoordinated overnight charging may be especially aggravated during times of summer heat waves, an issue that concerns Palo Alto’s utility managers. There are technical solutions available to help spread EV charging peak loads, but utilities will have to make the investments in better transformers and smart metering systems, as well as get regulatory permission to change electricity-rate structures to encourage off-peak charging. Vehicle-to-grid (V2G), which allows an EV to serve as a storage device to smooth out grid loads, may be another solution, but for most utilities in the United States, this is a long-term option. Numerous issues need to be addressed, such as the updating of millions of household electrical panels and smart meters to accommodate V2G, the creation of agreed-upon national technical standards for the information exchange needed between EVs and local utilities, the development of V2G regulatory policies, and residential and commercial business models, including fair compensation for utilizing an EV’s stored energy. As energy expert Chris Nelder noted at a National Academy EV workshop, "vehicle-to-grid is not really a thing, at least not yet. I don’t expect it to be for quite some time until we solve a lot of problems at various utility commissions, state by state, rate by rate." In the next article in the series, we will look at the complexities of creating an EV charging infrastructure.
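For concreteness, here is a small Python sketch, not an REC tool, that encodes the two off-peak windows quoted above (9 p.m. to 5 a.m. and 10 a.m. to 2 p.m.) and flags whether a charging session’s start time would qualify. How REC treats the window boundaries is not stated, so the half-open intervals below are an assumption.

```python
from datetime import datetime

# Off-peak windows as described in the article: 9 p.m.-5 a.m. and 10 a.m.-2 p.m.
# Boundary handling ([start, end) intervals) is assumed for illustration.

def is_off_peak(t: datetime) -> bool:
    h = t.hour
    overnight = h >= 21 or h < 5   # 9 p.m. through 4:59 a.m.
    midday = 10 <= h < 14          # 10 a.m. through 1:59 p.m.
    return overnight or midday

for hour in (2, 8, 11, 18, 22):
    stamp = datetime(2022, 11, 28, hour, 30)
    print(f"{stamp:%H:%M} off-peak? {is_off_peak(stamp)}")
```

A utility-grade version would read the actual tariff calendar and handle time zones and daylight saving time; the point here is only how narrow the incentive windows are relative to a full day.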
  • The James Webb Space Telescope was a Career-Defining Project for Janet Barth
    Nov 28, 2022 11:00 AM PST
    Janet Barth spent most of her career at the Goddard Space Flight Center, in Greenbelt, Md.—which put her in the middle of some of NASA’s most exciting projects of the past 40 years. She joined the center as a co-op student and retired in 2014 as chief of its electrical engineering division. She had a hand in Hubble Space Telescope servicing missions, launching the Lunar Reconnaissance Orbiter and the Magnetospheric Multiscale mission, and developing the James Webb Space Telescope. About Janet Barth Employer: Miller Engineering and Research Corp. Title: Advisory board member Member grade: Life Fellow Alma mater: University of Maryland in College Park Barth, an IEEE Life Fellow, conducted pioneering work in analyzing the effects of cosmic rays and solar radiation on spacecraft observatories. Her tools and techniques are still used today. She also helped develop science requirements for NASA’s Living With a Star program, which studies the sun, magnetospheres, and planetary systems. For her work, Barth was honored with this year’s IEEE Marie Sklodowska-Curie Award for “leadership of and contributions to the advancement of the design, building, deployment, and operation of capable, robust space systems.” “I still tear up just thinking about it,” Barth says. “Receiving this award is humbling. Everyone at IEEE and Goddard who I worked with owns a piece of this award.” From co-op hire to chief of NASA’s EE division Barth initially attended the University of Michigan in Ann Arbor, to pursue a degree in biology, but she soon realized that it wasn’t a good fit for her. She transferred to the University of Maryland in College Park, and changed her major to applied mathematics. She was accepted for a co-op position in 1978 at the Goddard center, which is about 9 kilometers from the university. Co-op jobs allow students to work at a company and gain experience while pursuing their degree. “I was excited about using my analysis and math skills to enable new science at Goddard,” she says. She conducted research on radiation environments and their effects on electronic systems. Goddard hired her after she graduated as a radiation and hardness assurance engineer. She helped ensure that the electronics and materials in space systems would perform as designed after being exposed to radiation in space. Because of her expertise in space radiation, George Withbroe, director of the NASA Solar-Terrestrial Physics program (now its Heliophysics Division), asked her in 1999 to help write a funding proposal for a program he wanted to launch—which became Living With a Star. It received US $2 billion from the U.S. Congress and launched in 2001. During her 12 years with the program, Barth helped write the architecture document, which she says became a seminal publication for the field of heliophysics (the study of the sun and how it influences space). The document outlines the program’s goals and objectives. In 2001 she was selected to be project manager for a NASA test bed that aimed to understand how spacecraft are affected by their environment. The test bed, which collected data from space to predict how radiation might impact NASA missions, successfully completed its mission in 2020. Barth reached the next rung on her career ladder in 2002, when she became one of the first female associate branch heads of engineering at Goddard. At the space center’s Flight Data Systems and Radiation Effects Branch, she led a team of engineers who designed flight computers and storage systems. 
Although it was a steep learning curve for her, she says, she enjoyed it. Three years later, she was heading the branch. She got another promotion, in 2010, to chief of the electrical engineering division. As the Goddard Engineering Directorate’s first female division chief, she led a team of 270 employees who designed, built, and tested electronics and electrical systems for NASA instruments and spacecraft. Barth (left) and Moira Stanton at the 1997 RADiation and its Effects on Components and Systems Conference, held in Cannes, France. Barth and Stanton coauthored a poster paper and received the outstanding poster paper award.Janet Barth Working on the James Webb Space Telescope Throughout her career, Barth was involved in the development of the Webb space telescope. Whenever she thought that she was done with the massive project, she says with a laugh, her path would “intersect with Webb again.” She first encountered the Webb project in the late 1990s, when she was asked to be on the initial study team for the telescope. She wrote its space-environment specifications. After they were published in 1998, however, the team realized that there were several complex problems to solve with the telescope’s detectors. The Goddard team supported Matt Greenhouse, John C. Mather, and other engineers to work on the tricky issues. Greenhouse is a project scientist for the telescope’s science instrument payload. Mather won the 2006 Nobel Prize in Physics for discoveries supporting the Big Bang model. The Webb’s detectors absorb photons—light from far-away galaxies, stars, and planets—and convert them into electronic voltages. Barth and her team worked with Greenhouse and Mather to verify that the detectors would work while exposed to the radiation environment at the L2 Lagrangian point, one of the positions in space where human-sent objects tend to stay put. Years later, when Barth was heading the Flight Data Systems and Radiation Effects branch, she oversaw the development of the telescope’s instrument command and data handling systems. Because of her important role, Barth’s name was written on the telescope’s instrument ICDH flight box. When she became chief of Goddard’s electrical engineering division, she was assigned to the technical review panel for the telescope. “At that point,” she says, “we focused on the mechanics of deployment and the risks that came with not being able to fully test it in the environment it would be launched and deployed in.” She served on that panel until she retired. In 2019, five years after retiring, she joined the Miller Engineering and Research Corp. advisory board. The company, based in Pasadena, Md., manufactures parts for aerospace and aviation organizations. “I really like the ethics of the company. They service science missions and crewed missions,” Barth says. “I went back to my roots, and that’s been really rewarding.” The best things about being an IEEE member Barth and her husband, Douglas, who is also an engineer, joined IEEE in 1989. She says they enjoy belonging to a “unique peer group.” She especially likes attending IEEE conferences, having access to journals, and being able to take continuing education courses and workshops, she says. “I stay up to date on the advancements in science and engineering,” she says, “and going to conferences keeps me inspired and motivated in what I do.” The networking opportunities are “terrific,” she adds, and she’s been able to meet people from just about all engineering industries. 
An active IEEE volunteer for more than 20 years, she is executive chairwoman of the IEEE Nuclear and Plasma Sciences Society’s Radiation Effects Steering Group, and she served as 2013–2014 president of the IEEE Nuclear and Plasma Sciences Society. She also is an associate editor for IEEE Transactions on Nuclear Science. “IEEE has definitely benefited my career,” she says. “There’s no doubt about that.”
  • The Ultimate Transistor Timeline
    Nov 27, 2022 08:00 AM PST
    Even as the initial sales receipts for the first transistors to hit the market were being tallied up in 1948, the next generation of transistors had already been invented (see "The First Transistor and How it Worked.") Since then, engineers have reinvented the transistor over and over again, raiding condensed-matter physics for anything that might offer even the possibility of turning a small signal into a larger one. This article is part of our special report on the 75th anniversary of the invention of the transistor. But physics is one thing; mass production is another. This timeline shows the time elapsed between the invention of several transistor types and the year they became commercially available. To be honest, finding the latter set of dates was often a murky business, and we welcome corrections. But it’s clear that the initial breakneck pace of innovation seems to have slowed from 1970 to 2000, likely because these were the golden years for Moore’s Law, when scaling down the dimensions of the existing metal-oxide-semiconductor field-effect transistors (MOSFETs) led to computers that doubled in speed every couple of years for the same money. Then, when the inevitable end of this exponential improvement loomed on the horizon, a renaissance in transistor invention seems to have begun and continues to this day. This article appears in the December 2022 print issue.
  • The State of the Transistor in 3 Charts
    Nov 26, 2022 08:00 AM PST
    The most obvious change in transistor technology in the last 75 years has been just how many we can make. Reducing the size of the device has been a titanic effort and a fantastically successful one, as these charts show. But size isn’t the only feature engineers have been improving. This article is part of our special report on the 75th anniversary of the invention of the transistor. In 1947, there was only one transistor. According to TechInsights’ forecast, the semiconductor industry is on track to produce almost 2 billion trillion (10²¹) devices this year. That’s more transistors than were cumulatively made in all the years prior to 2017. Behind that barely conceivable number is the continued reduction in the price of a transistor, as engineers have learned to integrate more and more of them into the same area of silicon. Scaling down transistors in the 2D space of the plane of the silicon has been a smashing success: Transistor density in logic circuits has increased more than 600,000-fold since 1971. Reducing transistor size requires using shorter wavelengths of light, such as extreme ultraviolet, and other lithography tricks to shrink the space between transistor gates and between metal interconnects. Going forward, it’s the third dimension, where transistors will be built atop one another, that counts. This trend is more than a decade old in flash memory, but it’s still in the future for logic (see "Taking Moore’s Law to New Heights.") Perhaps the crowning achievement of all this effort is the ability to integrate millions, even billions, of transistors into some of the most complex systems on the planet: CPUs. Here’s a look at some of the high points along the way. What Transistors Have Become: Besides making them tiny and numerous, engineers have devoted their efforts to enhancing the device’s other qualities. Here is a small sampling of what transistors have become in the last 75 years: Ephemeral: Researchers in Illinois developed circuits that dissolve in the body using a combination of ultrathin silicon membranes, magnesium conductors, and magnesium oxide insulators. Five minutes in water was enough to turn the first generation to mush. But recently researchers used a more durable version to make temporary cardiac pacemakers that release an anti-inflammatory drug as they disappear. Fast: The first transistor was made for radio frequencies, but there are now devices that operate at about a billion times those frequencies. Engineers in South Korea and Japan reported the invention of an indium gallium arsenide high-electron mobility transistor, or HEMT, that reached a maximum frequency of 738 gigahertz. Seeking raw speed, engineers at Northrop Grumman made a HEMT that passed 1 terahertz. Flat: Today’s (and yesterday’s) transistors depend on the semiconducting properties of bulk (3D) materials. Tomorrow’s devices might rely on 2D semiconductors, such as molybdenum disulfide and tungsten disulfide. These transistors might be built in the interconnect layers above a processor’s silicon, researchers say. So 2D semiconductors could help lead to 3D processors. Flexible: The world is not flat, and neither are the places transistors need to operate. Using indium gallium arsenide, engineers in South Korea recently made high-performance logic transistors on plastic that hardly suffered when bent around a radius of just 4 millimeters. And engineers in Illinois and England have made microcontrollers that are both affordable and bendable.
Invisible: When you need to hide your computing in plain sight, turn to transparent transistors. Researchers in Fuzhou, China, recently made a see-through analogue of flash memory using organic semiconductor thin-film transistors. And researchers in Japan and Malaysia produced transparent diamond devices capable of handling more than 1,000 volts. Mnemonic: NAND flash memory cells can store multiple bits in a single device. Those on the market today store either 3 or 4 bits each. Researchers at Kioxia Corp. built a modified NAND flash cell and dunked it in 77-kelvin liquid nitrogen. A single superchilled transistor could store up to 7 bits of data, or 128 different values. Talented: In 2018, engineers in Canada used an algorithm to generate all the possible unique and functional elementary circuits that can be made using just two metal-oxide field-effect transistors. The number of circuits totaled an astounding 582. Increasing the scope to three transistors netted 56,280 circuits, including several amplifiers previously unknown to engineering. Tough: Some transistors can take otherworldly punishment. NASA Glenn Research Center built 200-transistor silicon carbide ICs and operated them for 60 days in a chamber that simulates the environment on the surface of Venus—460 °C heat, a planetary-probe-crushing 9.3 megapascals of pressure, and the hellish planet’s corrosive atmosphere. This article appears in the December 2022 print issue as "The State of the Transistor."
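As a quick sanity check on the scaling claims in this piece, the quoted figures imply a fairly steady doubling cadence. The short sketch below (ours, not part of the article) derives the average doubling period from the 600,000-fold density increase since 1971 and prints this year’s forecast transistor output in plain numbers.

```python
import math

# Figures quoted in the article.
density_gain = 600_000   # increase in logic transistor density since 1971
years = 2022 - 1971
units_2022 = 2e21        # ~2 billion trillion transistors forecast for this year

doublings = math.log2(density_gain)   # about 19 doublings
print(f"Average doubling period: {years / doublings:.1f} years")
print(f"Forecast 2022 output: {units_2022:.1e} transistors")
```

The result, a doubling roughly every 2.7 years, is in the same ballpark as the every-couple-of-years cadence described in the accompanying timeline article.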
  • Video Friday: Turkey Sandwich
    Nov 25, 2022 09:13 AM PST
    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND Enjoy today’s videos! Happy Thanksgiving, for those who celebrate it. Now spend 10 minutes watching a telepresence robot assemble a turkey sandwich. [ Sanctuary ] Ayato Kanada, an assistant professor at Kyushu University, in Japan, wrote in to share “the world’s simplest omnidirectional mobile robot.” We propose a palm-sized omnidirectional mobile robot with two torus wheels. A single torus wheel is made of an elastic elongated coil spring in which the two ends of the coil connected each other and is driven by a piezoelectric actuator (stator) that can generate 2-degrees-of-freedom (axial and angular) motions. The stator converts its thrust force and torque into longitudinal and meridian motions of the torus wheel, respectively, making the torus work as an omnidirectional wheel on a plane. [ Paper ] Thanks, Ayato! This work, entitled “Virtually turning robotic manipulators into worn devices: opening new horizons for wearable assistive robotics,” proposes a novel hybrid system using a virtually worn robotic arm in augmented reality, and a real robotic manipulator servoed on such a virtual representation. We basically aim at creating the illusion of wearing a robotic system while its weight is fully supported. We believe that this approach could offer a solution to the critical challenge of weight and discomfort caused by robotic sensorimotor extensions—such as supernumerary robotic limbs (SRL), prostheses, or handheld tools—and open new horizons for the development of wearable robotics. [ Paper ] Thanks, Nathanaël! Engineers at Georgia Tech are the first to study the mechanics of springtails, which leap in the water to avoid predators. The researchers learned how the tiny hexapods control their jumps, self-right in midair, and land on their feet in the blink of an eye. The team used the findings to build penny-size jumping robots. [ Georgia Tech ] Thanks, Jason! The European Space Agency (ESA) and the European Space Resources Innovation Centre (ESRIC) have asked European space industries and research institutions to develop innovative technologies for the exploration of resources on the moon in the framework of the ESA-ESRIC Space Resources Challenge. As part of the challenge, teams of engineers have developed vehicles capable of prospecting for resources in a test-bed simulating the moon’s shaded polar regions. From 5 to 9 September 2022, the final of the ESA-ESRIC Space Resource Challenge took place at the Rockhal in Esch-sur-Alzette. On this occasion, lunar rover prototypes competed on a 1,800-square-meter “lunar” terrain. The winning team will have the opportunity to have their technology implemented on the moon. [ ESA ] Thanks, Arne! If only cobots were as easy to use as this video from Kuka makes it seem. The Kuka website doesn’t say how much this thing costs, which means it’s almost certainly not something that you impulse buy. [ Kuka ] We present the tensegrity aerial vehicle, a design of collision-resilient rotor robots with icosahedron tensegrity structures. With collision resilience and reorientation ability, the tensegrity aerial vehicles can operate in cluttered environments without complex collision-avoidance strategies. 
These capabilities are validated by a test of an experimental tensegrity aerial vehicle operating with only onboard inertial sensors in a previously unknown forest. [ HiPeR Lab ] Apparently, the World Cup needs more drone footage, because this is kinda neat. [ DJI ] Researchers at MIT’s Center for Bits and Atoms have made significant progress toward creating robots that could build nearly anything, including things much larger than themselves, from vehicles to buildings to larger robots. [ MIT ] The researchers from North Carolina State University have recently developed a fast and efficient soft robotic swimmer whose motions resemble a human’s butterfly-stroke style. It can achieve a high average swimming speed of 3.74 body lengths per second, close to five times as fast as the fastest similar soft swimmers, and also a high-power efficiency with a low energy cost. [ NC State ] To facilitate sensing and physical interaction in remote and/or constrained environments, high-extension, lightweight robot manipulators are easier to transport and can reach substantially further than traditional serial-chain manipulators. We propose a novel planar 3-degrees-of-freedom manipulator that achieves low weight and high extension through the use of a pair of spooling bistable tapes, commonly used in self-retracting tape measures, which are pinched together to form a reconfigurable revolute joint. [ Charm Lab ] SLURP! [ River Lab ] This video may encourage you to buy a drone. Or a snowmobile. [ Skydio ] Moxie is getting an update for the holidays! [ Embodied ] Robotics professor Henny Admoni answers the Internet’s burning questions about robots! How do you program a personality? Can robots pick up a single M&M? Why do we keep making humanoid robots? What is Elon Musk’s goal for the Tesla Optimus robot? Will robots take over my job writing video descriptions...I mean, um, all our jobs? Henny answers all these questions and much more. [ CMU ] This GRASP on Robotics talk is from Julie Adams at Oregon State University, on “Towards Adaptive Human-Robot Teams: Workload Estimation.” The ability for robots, be it a single robot, multiple robots, or a robot swarm, to adapt to the humans with which they are teamed requires algorithms that allow robots to detect human performance in real time. The multidimensional workload algorithm incorporates physiological metrics to estimate overall workload and its components (cognitive, speech, auditory, visual, and physical). The algorithm is sensitive to changes in a human’s individual workload components and overall workload across domains, human-robot teaming relationships (supervisory, peer-based), and individual differences. The algorithm has also been demonstrated to detect shifts in workload in real time in order to adapt the robot’s interaction with the human and autonomously change task responsibilities when the human’s workload is over- or underloaded. Recently, the algorithm was used to analyze post hoc the resulting workload for a single human deploying a heterogeneous robot swarm in an urban environment. Current efforts are focusing on predicting the human’s future workload, recognizing the human’s current tasks, and estimating workload for previously unseen tasks. [ UPenn ]
  • Tickle Pill Bug Toes With These Haptic Microfingers
    Nov 24, 2022 06:00 AM PST
    All things considered, we humans are kind of big, which is very limiting in how we can comfortably interact with the world. The practical effect of this is that we tend to prioritize things that we can see and touch and otherwise directly experience, even if those things are only a small part of the world in which we live. A recent study conservatively estimates that there are 2.5 million ants for every one human on Earth. And that’s just ants. There are probably something like 7 million different species of terrestrial insects, and humans have only even noticed like 10 percent of them. The result of this disconnect is that when (for example) insect populations around the world start to crater, it takes us much longer to first notice, care, and act. To give the small scale the attention that it deserves, we need a way of interacting with it. In a paper recently published in Scientific Reports, roboticists from Ritsumeikan University, in Japan, demonstrate a haptic teleoperation system that connects a human hand on one end with microfingers on the other, letting the user feel what it’s like to give a pill bug a tummy rub. At top, a microfinger showing the pneumatic balloon actuator (PBA) and liquid-metal strain gauge. At bottom left, when the PBA is deflated, the microfinger is straight. At bottom right, inflating the PBA causes the finger to bend downward. These microfingers are just 12 millimeters long, 3 mm wide, and 490 micrometers (μm) thick. Inside of each microfinger is a pneumatic balloon actuator, which is just a hollow channel that can be pressurized with air. Because the channel is on the top of the microfinger, when the channel is inflated, it bulges upward, causing the microfinger to bend down. When pressure is reduced, the microfinger returns to its original position. Separate channels in the microfinger are filled with liquid metal, and as the microfinger bends, the channels elongate, thinning out the metal. By measuring the resistance of the metal, you can tell how much the finger is being bent. This combination of actuation and force sensing means that a human-size haptic system can be used as a force feedback interface: As you move your fingers, the microfingers will move, and forces can be transmitted back to you, allowing you to feel what the microfingers feel. The microfingers [left] can be connected to a haptic feedback-and-control system for use by a human. Fans of the golden age of science fiction will recognize this system as a version of Waldo F. Jones’s Synchronous Reduplicating Pantograph, although the concept has even deeper roots in sci-fi: The thought suddenly struck me: I can make micro hands for my little hands. I can make the same gloves for them as I did for my living hands, use the same system to connect them to the handles ten times smaller than my micro arms, and then...I will have real micro arms, they will chop my movements two hundred times. With these hands I will burst into such a smallness of life that they have only seen, but where no one else has disposed of their own hands. And I got to work. With their very real and not science fiction system, the researchers were able to successfully determine that pill bugs can exert about 10 micronewtons of force through their legs, which is about the same as what has been estimated using other techniques. This is just a proof-of-concept study, but I’m excited about the potential here, because there is still so much of the world that humans haven’t yet been able to really touch. 
And besides just insect-scale tickling, there’s a broader practical context here around the development of insect-scale robots. Insects have had insect-scale sensing and mobility and whatnot pretty well figured out for a long time now, and if we’re going to make robots that can do insectlike things, we’re going to do it by learning as much as we can directly from insects themselves. “With our strain-sensing microfinger, we were able to directly measure the pushing motion and force of the legs and torso of a pill bug—something that has been impossible to achieve previously. We anticipate that our results will lead to further technological development for microfinger-insect interactions, leading to human-environment interactions at much smaller scales.” —Satoshi Konishi, Ritsumeikan University I should also be clear that despite the headline, I don’t know if it’s actually possible to tickle a bug. A Google search for “are insects ticklish” turns up one single result, from someone asking this question on the “StonerThoughts” subreddit. There is some suggestion that tickling, or more specifically the kind of tickling that is surprising and can lead to laughter called gargalesis, has evolved in social mammals to promote bonding. The other kind of tickling is called knismesis, which is more of an unpleasant sensation that causes irritation or distress. You know, like the feeling of a bug crawling on you. It seems plausible (to me, anyway) that bugs may experience some kind of knismesis—but I think that someone needs to get in there and do some science, especially now that we have the tools to make it happen.
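The article describes the microfinger’s liquid-metal channels acting as a strain gauge: bending elongates and thins the metal, which raises its resistance. A generic way to turn a measured resistance change into an estimated strain is the standard gauge-factor relation, ΔR/R₀ = GF × ε. The sketch below uses a hypothetical nominal resistance and gauge factor, since the paper’s calibration values aren’t given here.

```python
# Generic strain-gauge readout sketch; the numbers are illustrative, not from the paper.
# Gauge-factor relation: delta_R / R0 = GF * strain.

R0 = 100.0   # nominal channel resistance in ohms (assumed)
GF = 2.0     # gauge factor (assumed; typical for metallic gauges)

def strain_from_resistance(r_measured: float) -> float:
    """Estimate mechanical strain from a measured channel resistance."""
    return (r_measured - R0) / (R0 * GF)

for r in (100.0, 100.5, 101.0, 102.0):
    print(f"R = {r:6.1f} ohm -> strain ~ {strain_from_resistance(r):.4f}")
```

In the actual device, a calibrated mapping from strain to bend and contact force would sit on top of a readout like this, consistent with how the article describes measuring the roughly 10-micronewton forces exerted by a pill bug’s legs.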
  • IEEE SIGHT Founder Amarnath Raja Dies at 65
    Nov 23, 2022 11:00 AM PST
    Amarnath Raja Founder of IEEE Special Interest Group on Humanitarian Technology Senior member, 65; died 5 September Raja founded the IEEE Special Interest Group on Humanitarian Technology (SIGHT) in 2011. The global network partners with underserved communities and local organizations to leverage technology for sustainable development. He began his career in 1980 as a management trainee at the National Dairy Development Board, in Anand, India. A year later he joined Milma, a state government marketing cooperative for the dairy industry, in Thiruvananthapuram, as a manager of planning and systems. After 15 years with Milma, he joined IBM in Tokyo as a manager of technology services. In 2000 he helped found InApp, a company in Palo Alto, Calif., that provides software development services. He served as its CEO and executive chairman until he died. Raja was the 2011–2012 chair of the IEEE Humanitarian Activities Committee. He wanted to find a way to mobilize engineers to apply their expertise to develop sustainable solutions that help their local community. To achieve that goal, in 2011 he founded IEEE SIGHT. Today there are more than 150 SIGHT groups in 50 countries that are working on projects such as sustainable irrigation and photovoltaic systems. For his efforts, he received the 2015 Larry K. Wilson Transnational Award from IEEE Member and Geographic Activities. The award honors effective efforts to fulfill one or more of the MGA goals and strategic objectives related to transnational activities. For the past two years, Raja chaired the IEEE Admission and Advancement Review Panel, which approves applications for new members and elevations to higher membership grades. He was a member of the International Centre for Free and Open Source Software’s advisory board. The organization was established by the government of Kerala, India, to facilitate the development and distribution of free, open-source software. Raja also served as one of the directors of the nongovernmental organization Bedroc.in, which was established to continue the disaster rehabilitation work started by him and his team after the 2004 Indian Ocean tsunami. He earned his bachelor’s degree in chemical engineering in 1979 from the Indian Institute of Technology in Delhi. Donn S. Terry Software engineer Life member, 74; died 14 September Terry was a computer engineer at Hewlett-Packard in Fort Collins, Colo., for 18 years. He joined HP in 1978 as a software developer, and he chaired the Portable Operating System Interface (POSIX) working group. POSIX is a family of standards specified by the IEEE Computer Society for maintaining compatibility among operating systems. While there, he also developed software for the Motorola 68000 microprocessor. Terry left HP in 1997 to join Softway Solutions, also in Fort Collins, where he developed tools for Interix, a Unix subsystem of the Windows NT operating system. After Microsoft acquired Softway in 1999, he stayed on as a senior software development engineer at its Seattle location. There he worked on static analysis, a method of computer-program debugging that is done by examining the code without executing the program. He also helped to create SAL, a Microsoft source-code annotation language, which was developed to make code design easier to understand and analyze. Terry retired in 2014. He loved science fiction, boating, cooking, and spending time with his family, according to his daughter, Kristin. He earned a bachelor’s degree in electrical engineering in 1970 and a Ph.D.
in computer science in 1978, both from the University of Washington in Seattle. William Sandham Signal processing engineer Life senior member, 70; died 25 August Sandham applied his signal processing expertise to a wide variety of disciplines including medical imaging, biomedical data analysis, and geophysics. He began his career in 1974 as a physicist at the University of Glasgow. While working there, he pursued a Ph.D. in geophysics. He earned his degree in 1981 at the University of Birmingham in England. He then joined the British National Oil Corp. (now Britoil) as a geophysicist. In 1986 he left to join the University of Strathclyde, in Glasgow, as a lecturer in the signal processing department. During his time at the university, he published more than 200 journal papers and five books that addressed blood glucose measurement, electrocardiography data analysis and compression, medical ultrasound, MRI segmentation, prosthetic limb fitting, and sleep apnea detection. Sandham left the university in 2003 and founded Scotsig, a signal processing consulting and research business, also in Glasgow. He served on the editorial board of IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing and the EURASIP Journal on Advances in Signal Processing. He was a Fellow of the Institution of Engineering and Technology and a member of the European Association of Geoscientists and Engineers and the Society of Exploration Geophysicists. Sandham earned his bachelor’s degree in electrical engineering in 1974 from the University of Glasgow. Stephen M. Brustoski Loss-prevention engineer Life member, 69; died 6 January For 40 years, Brustoski worked as a loss-prevention engineer for insurance company FM Global. He retired from the company, which was headquartered in Johnston, R.I., in 2014. He was an elder at his church, CrossPoint Alliance, in Akron, Ohio, where he oversaw administrative work and led Bible studies and prayer meetings. He was an assistant scoutmaster for 12 years, and he enjoyed hiking and traveling the world with his family, according to his wife, Sharon. Brustoski earned a bachelor’s degree in electrical engineering in 1973 from the University of Akron. Harry Letaw President and CEO of Essex Corp. Life senior member, 96; died 7 May 2020 As president and CEO of Essex Corp., in Columbia, Md., Letaw handled the development and commercialization of optoelectronic and signal processing solutions for defense, intelligence, and commercial customers. He retired in 1995. He had served in World War II as an aviation engineer for the U.S. Army. After he was discharged, he earned a bachelor’s degree in chemistry, then a master’s degree and Ph.D., all from the University of Florida in Gainesville, in 1949, 1951, and 1952. After he graduated, he became a postdoctoral assistant at the University of Illinois at Urbana-Champaign. He left to become a researcher at Raytheon Technologies, an aerospace and defense manufacturer, in Wayland, Mass. Letaw was a member of the American Physical Society and the Phi Beta Kappa and Sigma Xi honor societies.
  • Hi-fi, Radio, and Retro: The DIY Projects Spectrum Readers Love
    Nov 23, 2022 08:13 AM PST
    This month we’re celebrating the launch of our second PDF collection of Hands On articles, which IEEE members can download from IEEE Spectrum’s website and share with friends. So we thought we’d take a look at the relative popularity of Hands On articles published over the last five years and share the top 15 projects our website visitors found most interesting. Just to give a little peek behind our analytics curtain, the measure of popularity Spectrum’s editors use is “total engaged minutes,” or TEM, which combines page views of articles with how long visitors spend reading them. We use TEM because we’re not terribly interested in grabbing folks with a clickbait headline, only for them to bounce out before they’ve finished reading the first paragraph. The first thing that jumps out is that Spectrum readers love good quality audio, but unlike some audiophiles, they don’t see an exorbitant price tag or voodoo components as a badge of honor. Far and away our most popular article in the last five years has been “Build Your Own Professional-Grade Audio Amp on the Sort of Cheap” (November 2018). And a follow-up to that article, “A Web-Enabled, High Quality, DIY Audio Amp” (March 2022), comes in at No. 9. They all share that magic element of unexpected delight Radio is another popular subject, although not radio as, say, old-school hams might know it. A third of the top 15 articles relate to wireless and radio tech, with one concerning a home-brew radio telescope (October 2019). The other four are about exchanging data from distances ranging from a few tens of meters to hundreds of kilometers. The final cluster falls under the umbrella of retrotech. Sometimes it’s about a functionally identical replica of a legendary computer, as in “Build Your Own Altair 8800 Personal Computer” (March 2018), but more often it's about remixing new technology using the principles of the old to better understand the latter. “Build This 8-Bit Home Computer With Just 5 Chips” (April 2020) revisited the clever hack that allowed early home computers with only digital circuits to display color graphics on analog televisions. “Print an Arduino-Powered Color Mechanical Television” (June 2022) went even further back in broadcast history to reveal the surprising quality that electromechanical televisions were capable of. The remaining articles are a potpourri of topics, but I think they all share that magic element of unexpected delight that engineers are always hoping for. With just a bit of know-how applied in the right way, the world becomes a bit more interesting. "Build a RISC-V CPU From Scratch” (June 2021) showed that it was possible to design modern computer architectures at home without needing exotic tools or a semiconductor fab, while “Use Your Bike as a Backup to Your Backup Power Supply” (November 2020) spoke to engineers’ natural skepticism of marketing promises regarding reliability by showing how you could use a conventional bicycle as a backup to your backup power supply. Are there trends in DIY projects you think we’re missing? Drop me a line at cass.s@ieee.org! This article appears in the December 2022 print issue as “Hi-fi, Radio, Retro, and More.”
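The piece defines its popularity metric, "total engaged minutes," only loosely (page views combined with time spent reading), and the exact formula isn’t given. Purely to illustrate the kind of aggregation involved, here is a sketch that sums per-visit engaged time for each article; the data structure, article labels, and numbers are invented.

```python
# Hypothetical aggregation of "total engaged minutes" (TEM) per article.
# Spectrum's real analytics formula is not specified in the article; this is a stand-in.

visits = [
    {"article": "DIY audio amp", "engaged_minutes": 6.5},
    {"article": "DIY audio amp", "engaged_minutes": 4.0},
    {"article": "Altair 8800 replica", "engaged_minutes": 3.2},
]

tem: dict[str, float] = {}
for v in visits:
    tem[v["article"]] = tem.get(v["article"], 0.0) + v["engaged_minutes"]

for article, minutes in sorted(tem.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{article}: {minutes:.1f} engaged minutes")
```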
  • Robot Gift Guide 2022
    Nov 22, 2022 01:05 PM PST
    It’s been a couple of years, but the IEEE Spectrum Robot Gift Guide is back for 2022! We’ve got all kinds of new robots, and right now is an excellent time to buy one (or a dozen), since many of them are on sale this week. We’ve tried to focus on consumer robots that are actually available (or that you can at least order), but depending on when you’re reading this guide, the prices we have here may not be up to date, and we’re not taking shipping into account. And if these robots aren’t enough for you, many of our picks from years past are still available: check out our guides from 2019, 2018, 2017, 2016, 2015, 2014, 2013, and 2012. And as always, if you have suggestions that you’d like to share, post a comment to help the rest of us find the perfect robot gift. Lego Robotics Kits Lego has decided to discontinue its classic Mindstorms robotics kits, but they’ll be supported for another couple of years and this is your last chance to buy one. If you like Lego’s approach to robotics education but don’t want to invest in a system at the end of its life, Lego also makes an education kit called Spike that shares many of the hardware and software features for students in grades 6 to 8. $360–$385 Lego Sphero Indi Indi is a clever educational robot designed to teach problem solving and screenless coding to kids as young as 4, using a small wheeled robot with a color sensor and a system of colored strips that command the robot to do different behaviors. There’s also an app to access more options, and Sphero has more robots to choose from once your kid is ready for something more. $110 Sphero | Amazon Nybble and Bittle Petoi’s quadrupedal robot kits are an adorable (and relatively affordable) way to get started with legged robotics. Whether you go with Nybble the cat or Bittle the dog, you get to do some easy hardware assembly and then leverage a bunch of friendly software tools to get your little legged friend walking around and doing tricks. $220–$260 Petoi iRobot Root Root educational robots have a long and noble history, and iRobot has built on that to create an inexpensive platform to help kids learn to code starting as young as age 4. There are two different versions of Root; the more expensive one includes an RGB sensor, a programmable eraser, and the ability to stick to vertical whiteboards and move around on them. $100–$250 iRobot TurtleBot 4 The latest generation of TurtleBot from Clearpath, iRobot, and Open Robotics is a powerful and versatile ROS (Robot Operating System) platform for research and product development. For aspiring roboticists in undergrad and possibly high school, the Turtlebot 4 is just about as good as it gets unless you want to spend an order of magnitude more. And the fact that TurtleBots are used so extensively means that if you need some help, the ROS community will (hopefully) have your back. $1,200–$1,900 RoboShop iRobot Create 3 Newly updated just last year, iRobot's Create 3 is the perfect platform for folks who want to build their own robot, but not all of their own robot. The rugged mobile base is essentially a Roomba without the cleaning parts, and it's easy to add your own hardware on top. It runs ROS 2, but you can get started with Python. $300 iRobot Mini Pupper Mini Pupper is one of the cutest ways of getting started with ROS. This legged robot is open source, and runs ROS on a Raspberry Pi, which makes it extra affordable if you have your own board lying around. 
Even if you don’t, though, the Mini Pupper kit is super affordable for what you get, and is a fun hardware project if you decide to save a little extra cash by assembling it yourself. $400–$585 MangDang Luxonis Rae I’m not sure whether the world is ready for ROS 2 yet, but you can get there with Rae, which combines a pocket-size mobile robot with a pair of depth cameras and onboard computer shockingly cheaply. App support means that Rae can do cool stuff out of the box, but it’s easy to get more in-depth with it too. Rae will get delivered early next year, but it’s cool enough that we think a Kickstarter IOU is a perfectly acceptable gift. $400 Kickstarter Roomba Combo j7+ iRobot’s brand new top-of-the-line fully autonomous vacuuming and wet-mopping combo j7+ Roomba will get your floors clean and shiny, except for carpet, which it’s smart enough to not try to shine because it’ll cleverly lift the wet mop up out of the way. It’s also cloud connected and empties itself. You’ll have to put water in it if you want it to mop, but that’s way better than mopping yourself. $900 iRobot Neato D9 Neato’s robots might not be quite as pervasive as the Roomba, but they’re excellent vacuums, and they use a planar lidar system for obstacle avoidance and map making. The nice thing about lidar (besides the fact that it works in total darkness) is that Neato robots have no cameras at all and are physically incapable of collecting imagery of you or your home. $300 Neato Robotics Tertill How often do you find an affordable, useful, reliable, durable, fully autonomous home robot? Not often! But Tertill is all of these things: powered entirely by the sun, it slowly prowls around your garden, whacking weeds as they sprout while avoiding your mature plants. All you have to do is make sure it can’t escape, then just let it loose and forget about it for months at a time. $200 Tertill Amazon Astro If you like the idea of having a semi-autonomous mobile robot with a direct link to Amazon wandering around your house trying to be useful, then Amazon’s Astro might not sound like a terrible idea. You’ll have to apply for one, and it sounds like it’s more like a beta program, but could be fun, I guess? $1,000 Amazon Skydio 2+ The Skydio 2+ is an incremental (but significant) update to the Skydio 2 drone, with its magically cutting-edge obstacle avoidance and extremely impressive tracking skills. There are many drones out there that are cheaper and more portable, and if flying is your thing, get one of those. But if filming is your thing, the Skydio 2+ is the drone you want to fly. $900 Skydio DJI FPV We had a blast flying DJI’s FPV drone. The VR system is exhilarating and the drone is easy to fly even for FPV beginners, but it’s powerful enough to grow along with your piloting skills. Just don’t get cocky, or you’ll crash it. Don’t ask me how I know this. $900 DJI ElliQ ElliQ is an embodied voice assistant that is a lot more practical than a smart speaker. It's designed for older adults who may spend a lot of time alone at home, and can help with a bunch of things, including health and wellness tasks and communicating with friends and family. ElliQ costs $250 up front, plus a subscription of between $30 and $40 per month. 
$250+ ElliQ Moxie Not all robots for kids are designed to teach them to code: Moxie “supports social-emotional development in kids through play.” The carefully designed and curated interaction between Moxie and children helps them to communicate and build social skills in a friendly and engaging way. Note that Moxie also requires a subscription fee of $40 per month. $800 Embodied Petit Qoobo What is Qoobo? It is “a tailed cushion that heals your heart,” according to the folks that make it. According to us, it’s a furry round pillow that responds to your touch by moving its tail, sort of like a single-purpose cat. It’s fuzzy tail therapy! $130 Qoobo | Amazon Unitree Go1 Before you decide on a real dog, consider the Unitree Go1 instead. Sure it’s expensive, but you know what? So are real dogs. And unlike with a real dog, you only have to walk the Go1 when you feel like it, and you can turn it off and stash it in a closet or under a bed whenever you like. For a fully featured dynamic legged robot, it’s staggeringly cheap; just keep in mind that shipping is $1,000. $2,700 Unitree
  • Delving for Joules in the Fusion Mines
    Nov 22, 2022 08:00 AM PST
    The Big Picture features technology through the lens of photographers. Every month, IEEE Spectrum selects the most stunning technology images recently captured by photographers around the world. We choose images that reflect an important advance, or a trend, or that are just mesmerizing to look at. We feature all images on our site, and one also appears on our monthly print edition. Enjoy the latest images, and if you have suggestions, leave a comment below. Shot of Nuclear Fusion An old saw regarding the multitude of dashed hopes about fusion energy’s promise goes “Fusion is 30 years away—and it always will be.” After decades of researchers predicting that fusion was just around the corner, a team at the UK Atomic Energy Authority (which hosts the Joint European Torus [JET] plasma physics experiment) did something that suggests scientists are homing in on exactly which corner that is. In February 2022, the JET experimenters induced the single greatest sustained energy pulse ever created by humans. It had twice the energy of the previous record-setting blast, triggered a quarter century earlier. A doubling every 25 years is far behind the pace of the microchip improvements described by Moore’s Law. But that hasn’t dampened enthusiasm over an alternative energy source that could make fossil fuels and their effect on the environment relics of a bygone era. In the foreground of the picture is a trainee learning how to use the systems involved in accomplishing the feat. Leon Neal/Getty Images Turning Drones into Scones What has two wings, can reach a person stranded in a disaster zone, and doubles as a source of precious calories when no other food is available? This drone, designed and built by a team of researchers at the Swiss Federal Institute of Technology Lausanne (EPFL), has wings made entirely of laser-cut rice cakes held together with “glue” made from gelatin. The EPFL group says it plans to keep refining the edible aircraft to improve its aeronautics and enhance its nutritional profile. EPFL Metasurface Weaves Entangled Photons Creating the quantum mechanical state of entanglement (in which paired atoms influence each other from across vast distances) has heretofore been reminiscent of the story of Noah’s ark. The tried-and-true method for entangling photons (by shining light through a nonlinear crystal) puts them in this state two by two, the way the animals are said to have boarded the ark. The ambition of quantum researchers has been to expand these connections from pairs to parties. And it seems they’ve figured out how to reliably entangle multiple photons in a complicated web, using half-millimeter-thick metasurfaces covered with forests of microscopic pillars. This, say experts, will not only greatly simplify the setup needed for quantum technology but also help support more-complex quantum applications. Craig Fritz Colossal Camera Coming to Chile In a world obsessed with miniaturization, it’s almost shocking when, every now and then, a big deal is made of something, er, big. That is certainly the case with the new camera being built for the Vera C. Rubin Observatory in Chile. When the camera is delivered and set up in May 2023, its 1.57-meter-wide lens will make it the world’s largest device for taking snapshots. The gargantuan point-and-shoot instrument will capture images of a swath of the sky seven times the width of the moon. 
Jacqueline Ramseyer Orrell/SLAC National Accelerator Laboratory Bionic Hands Haven’t Fully Grasped Users’ Needs When we’re carrying out our quotidian activities, most of us rarely stop to think about what marvels of engineering our arms and hands are. But for those who have lost the use of a limb—or, like Britt Young, the woman pictured here, were born without one—there’s hardly ever a day when the challenges of navigating a two-handed world are not in the forefront of their thoughts. In Young’s October 2022 IEEE Spectrum cover story, she discusses these challenges, as well as how the bionic-hand technology intended to come to the rescue falls short of designers’ and users’ expectations. Gabriela Hasbun. Makeup: Maria Nguyen for Mac Cosmetics; Hair: Joan Laqui for Living Proof
  • The Women Behind ENIAC
    Nov 21, 2022 11:00 AM PST
If you looked at the pictures of those working on the first programmable, general-purpose all-electronic computer, you would assume that J. Presper Eckert and John W. Mauchly were the only ones who had a hand in its development. Invented in 1945, the Electronic Numerical Integrator and Computer (ENIAC) was built to improve the accuracy of U.S. artillery during World War II. The two men and their team built the hardware. But hidden behind the scenes were six women—Jean Bartik, Kathleen Antonelli, Marlyn Meltzer, Betty Holberton, Frances Spence, and Ruth Teitelbaum—who programmed the computer to calculate artillery trajectories in seconds. The U.S. Army recruited the women in 1942 to work as so-called human computers—mathematicians who did calculations using a mechanical desktop calculator. For decades, the six women were largely unknown. But thanks to Kathy Kleiman, cofounder of ICANN (the Internet Corporation for Assigned Names and Numbers), the world is getting to know the ENIAC programmers’ contributions to computer science. This year Kleiman’s book Proving Ground: The Untold Story of the Six Women Who Programmed the World’s First Modern Computer was published. It delves into the women’s lives and the pioneering work they did. The book follows an award-winning documentary, The Computers: The Remarkable Story of the ENIAC Programmers, which Kleiman helped produce. It premiered at the 2014 Seattle International Film Festival and won Best Documentary Short at the 2016 U.N. Association Film Festival. Kleiman plans to give a presentation next year about the programmers as part of the IEEE Industry Hub Initiative’s Impact Speaker series. The initiative aims to introduce industry professionals and academics to IEEE and its offerings. Planning for the event, which is scheduled to be held in Silicon Valley, is underway. Details are to be announced before the end of the year. The Institute spoke with Kleiman, who teaches Internet technology and governance for lawyers at American University, in Washington, D.C., about her mission to publicize the programmers’ contributions. The interview has been condensed and edited for clarity. Kathy Kleiman delves into the ENIAC programmers’ lives and the pioneering work they did in her book Proving Ground: The Untold Story of the Six Women Who Programmed the World’s First Modern Computer. Kathy Kleiman The Institute: What inspired you to film the documentary? Kathy Kleiman: The ENIAC was a secret project of the U.S. Army during World War II. It was the first general-purpose, programmable, all-electronic computer—the key to the development of our smartphones, laptops, and tablets today. The ENIAC was a highly experimental computer, with 18,000 vacuum tubes, and some of the leading technologists at the time didn’t think it would work, but it did. Six months after the war ended, the Army decided to reveal the existence of ENIAC and heavily publicize it. To do so, in February 1946 the Army took a lot of beautiful, formal photos of the computer and the team of engineers that developed it. I found these pictures while researching women in computer science as an undergraduate at Harvard. At the time, I knew of only two women in computer science: Ada Lovelace and then U.S. Navy Capt. Grace Hopper. [Lovelace was the first computer programmer; Hopper co-developed COBOL, one of the earliest standardized computer languages.] But I was sure there were more women programmers throughout history, so I went looking for them and found the images taken of the ENIAC.
The pictures fascinated me because they had both men and women in them. Some of the photos had just women in front of the computer, but they weren’t named in any of the photos’ captions. I tracked them down after I found their identities, and four of six original ENIAC programmers responded. They were in their late 70s at the time, and over the course of many years they told me about their work during World War II and how they were recruited by the U.S. Army to be “human computers.” Eckert and Mauchly promised the U.S. Army that the ENIAC could calculate artillery trajectories in seconds rather than the hours it took to do the calculations by hand. But after they built the 2.5-meter-tall by 24-meter-long computer, they couldn’t get it to work. Out of approximately 100 human computers working for the U.S. Army during World War II, six women were chosen to write a program for the computer to run differential calculus equations. It was hard because the program was complex, memory was very limited, and the direct programming interface that connected the programmers to the ENIAC was hard to use. But the women succeeded. The trajectory program was a great success. But Bartik, McNulty, Meltzer, Snyder, Spence, and Teitelbaum’s contributions to the technology were never recognized. Leading technologists and the public never knew of their work. I was inspired by their story and wanted to share it. I raised funds, researched and recorded 20 hours of broadcast-quality oral histories with the ENIAC programmers—which eventually became the documentary. It allows others to see the women telling their story. “If we open the doors to history, I think it would make it a lot easier to recruit the wonderful people we are trying to urge to enter engineering, computer science, and related fields.” Why was the accomplishment of the six women important? Kleiman: The ENIAC is considered by many to have launched the information age. We generally think of women leaving the factory and farm jobs they held during World War II and giving them back to the men, but after ENIAC was completed, the six women continued to work for the U.S. Army. They helped world-class mathematicians program the ENIAC to complete “hundred-year problems” [problems that would take 100 years to solve by hand]. They also helped teach the next generation of ENIAC programmers, and some went on to create the foundations of modern programming. What influenced you to continue telling the ENIAC programmers’ story in your book? Kleiman: After my documentary premiered at the film festival, young women from tech companies who were in the audience came up to me to share why they were excited to learn the programmers’ story. They were excited to learn that women were an integral part of the history of early computing programming, and were inspired by their stories. Young men also came up to me and shared stories of their grandmothers and great-aunts who programmed computers in the 1960s and ’70s and inspired them to explore careers in computer science. I met more women and men like the ones in Seattle all over the world, so it seemed like a good idea to tell the full story along with its historical context and background information about the lives of the ENIAC programmers, specifically what happened to them after the computer was completed. What did you find most rewarding about sharing their story? Kleiman: It was wonderful and rewarding to get to know the ENIAC programmers. They were incredible, wonderful, warm, brilliant, and exceptional people. 
Talking to the people who created the programming was inspiring and helped me to see that I could work at the cutting edge too. I entered Internet law as one of the first attorneys in the field because of them. What I enjoy most is that the women’s experiences inspire young people today just as they inspired me when I was an undergraduate. Clockwise from top left: Jean Bartik, Kathleen Antonelli, Betty Holberton, Ruth Teitelbaum, Marlyn Meltzer, Frances Spence. Credits, clockwise from top left: The Bartik Family; Bill Mauchly; Priscilla Holberton; Teitelbaum Family; Meltzer Family; Spence Family Is it important to highlight the contributions made throughout history by women in STEM? Kleiman: [Actor] Geena Davis founded the Geena Davis Institute on Gender in Media, which works collaboratively with the entertainment industry to dramatically increase the presence of female characters in media. It’s based on the philosophy of “you can’t be what you can’t see.” That philosophy is both right and wrong. I think you can be what you can’t see, and certainly every pioneer who has ever broken a racial, ethnic, religious, or gender barrier has done so. However, it’s certainly much easier to enter a field if there are role models who look like you. To that end, many computer scientists today are trying to diversify the field. Yet I know from my work in Internet policy and my recent travels across the country for my book tour that many students still feel locked out because of old stereotypes in computing and engineering. By sharing strong stories of pioneers in the fields who are women and people of color, I hope we can open the doors to computing and engineering. I hope the history and herstory that are shared make it much easier to recruit young people to join engineering, computer science, and related fields. Are you planning on writing more books or producing another documentary? Kleiman: I would like to continue the story of the ENIAC programmers and write about what happened to them after the war ended. I hope that my next book will delve into the 1950s and uncover more about the history of the Universal Automatic Computer, the first modern commercial computer series, and the diverse group of people who built and programmed it.
  • The U.S.-China Chip Ban, Explained
    Nov 21, 2022 09:28 AM PST
It has now been over a month since the U.S. Commerce Department issued new rules that clamped down on the export of certain advanced chips—which have military or AI applications—to Chinese customers. China has yet to respond—but Beijing has multiple options in its arsenal. It’s unlikely, experts say, that the U.S. actions will be the last fighting word in an industry that is becoming more geopolitically sensitive by the day. This is not the first time that the U.S. government has constrained the flow of chips to its perceived adversaries. Previously, the United States has blocked chip sales to individual Chinese customers. In response to the Russian invasion of Ukraine earlier this year, the United States (along with several other countries, including South Korea and Taiwan) placed Russia under a chip embargo. But none of these prior U.S. chip bans were as broad as the new rules, issued on 7 October. “This announcement is perhaps the most expansive export control in decades,” says Sujai Shivakumar, an analyst at the Center for Strategic and International Studies, in Washington. The rules prohibit the sale, to Chinese customers, of advanced chips with both high performance (at least 300 trillion operations per second, or 300 teraops) and fast interconnect speed (generally, at least 600 gigabytes per second). Nvidia’s A100, for comparison, is capable of over 600 teraops and matches the 600 GB/s interconnect speed. Nvidia’s more-impressive H100 can reach nearly 4,000 trillion operations per second and 900 GB/s. Both chips, intended for data centers and AI trainers, cannot be sold to Chinese customers under the new rules. Additionally, the rules restrict the sale of fabrication equipment if it will knowingly be used to make certain classes of advanced logic or memory chips. This includes logic chips produced at nodes of 16 nanometers or less (which the likes of Intel, Samsung, and TSMC have done since the early 2010s); NAND long-term memory integrated circuits with at least 128 layers (the state of the art today); or DRAM short-term memory integrated circuits produced at 18 nanometers or less (which Samsung began making in 2016). Chinese chipmakers have barely scratched the surface of those numbers. SMIC switched on 14-nm mass production this year, despite facing existing U.S. sanctions. YMTC started shipping 128-layer NAND chips last year. The rules restrict not just U.S. companies, but citizens and permanent residents as well. U.S. employees at Chinese semiconductor firms have had to pack up. ASML, a Dutch maker of fabrication equipment, has told U.S. employees to stop servicing Chinese customers. Speaking of Chinese customers, most—including offices, gamers, designers of smaller chips—probably won’t feel the controls. “Most chip trade and chip production in China is unimpacted,” says Christopher Miller, a historian who studies the semiconductor trade at Tufts University. The controlled sorts of chips instead go into supercomputers and large data centers, and they’re desirable for training and running large machine-learning models. Most of all, the United States hopes to stop Beijing from using chips to enhance its military—and potentially preempt an invasion of Taiwan, where the vast majority of the world’s semiconductors and microprocessors are produced. In order to seal off one potential bypass, the controls also apply to non-U.S. firms that rely on U.S.-made equipment or software. 
For instance, Taiwanese or South Korean chipmakers can’t sell Chinese customers advanced chips that are fabricated with U.S.-made technology. It’s possible to apply to the U.S. government for an exemption from at least some of the restrictions. Taiwanese fab juggernaut TSMC and South Korean chipmaker SK Hynix, for instance, have already acquired temporary exemptions—for a year. “What happens after that is difficult to say,” says Patrick Schröder, a researcher at Chatham House in London. And the Commerce Department has already stated that such licenses will be the exception, not the rule (although Commerce Department undersecretary Alan Estevez suggested that around two-thirds of licenses get approved). More export controls may be en route. Estevez indicated that the government is considering placing restrictions on technologies in other sensitive fields—specifically mentioning quantum information science and biotechnology, both of which have seen China-based researchers forge major progress in the past decade. The Chinese government has so far retorted with harsh words and little action. “We don’t know whether their response will be an immediate reaction or whether they have a longer-term approach to dealing with this,” says Shivakumar. “It’s speculation at this point.” Beijing could work with foreign companies whose revenue in the lucrative Chinese market is now under threat. “I’m really not aware of a particular company that thinks it’s coming out a winner in this,” says Shivakumar. This week, in the eastern city of Hefei, the Chinese government hosted a chipmakers’ conference whose attendees included U.S. firms AMD, Intel, and Qualcomm. Nvidia has already responded by introducing a China-specific chip, the A800, which appears to be a modified A100 cut down to meet the requirements. Analysts say that Nvidia’s approach could be a model for other companies to keep up Chinese sales. There may be other tools the Chinese government can exploit. While China may be dependent on foreign semiconductors, foreign electronics manufacturers are in turn dependent on China for rare-earth metals—and China supplies the supermajority of the world’s rare earths. There is precedent for China curtailing its rare-earth supply for geopolitical leverage. In 2010, a Chinese fishing boat collided with two Japanese Coast Guard vessels, triggering an international incident when Japanese authorities arrested the boat’s captain. In response, the Chinese government cut off rare-earth exports to Japan for several months. Certainly, much of the conversation has focused on the U.S. action and the Chinese reaction. But for third parties, the entire dispute delivers constant reminders of just how tense and volatile the chip supply can be. In the European Union, home to less than 10 percent of the world’s microchip market, the debate has bolstered interest in the prospective European Chips Act, a plan to heavily invest in fabrication in Europe. “For Europe in particular, it’s important not to get caught up in this U.S.-China trade issue,” Schröder says. “The way in which the semiconductor industry has evolved over the past few decades has been predicated on a relatively stable geopolitical order,” says Shivakumar. “Obviously, the ground realities have shifted.”
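To make the headline thresholds described above concrete, here is a minimal, purely illustrative sketch in Python of the two-part test the article summarizes (at least 300 teraops of performance and roughly 600 GB/s of interconnect). The constant names, the helper function, and the simplified logic are ours; the actual Commerce Department rules are far more detailed.

```python
# Illustrative only: a toy version of the two headline thresholds described above.
# The real export rules are far more detailed; these names and numbers are
# simplifications for the sake of the example.

PERF_TERAOPS_THRESHOLD = 300        # "at least 300 trillion operations per second"
INTERCONNECT_GB_PER_S_THRESHOLD = 600  # "generally, at least 600 gigabytes per second"

def headline_rule_applies(teraops: float, interconnect_gb_per_s: float) -> bool:
    """A chip is covered only if it clears BOTH bars, per the article's summary."""
    return (teraops >= PERF_TERAOPS_THRESHOLD
            and interconnect_gb_per_s >= INTERCONNECT_GB_PER_S_THRESHOLD)

print(headline_rule_applies(624, 600))  # A100-class part: True (covered)
print(headline_rule_applies(624, 400))  # same compute, slower interconnect: False
```

The second call hints at why a derated part such as the A800 (reportedly an A100 with its interconnect speed reduced below the threshold) can remain on sale: falling under either bar takes a chip outside the headline test.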
  • The Transistor of 2047: Expert Predictions
    Nov 21, 2022 08:00 AM PST
The 100th anniversary of the invention of the transistor will happen in 2047. What will transistors be like then? Will they even be the critical computing element they are today? IEEE Spectrum asked experts from around the world for their predictions. What will transistors be like in 2047? This article is part of our special report on the 75th anniversary of the invention of the transistor. Expect transistors to be even more varied than they are now, says one expert. Just as processors have evolved from CPUs to include GPUs, network processors, AI accelerators, and other specialized computing chips, transistors will evolve to fit a variety of purposes. “Device technology will become application domain–specific in the same way that computing architecture has become application domain–specific,” says H.-S. Philip Wong, an IEEE Fellow, professor of electrical engineering at Stanford University, and former vice president of corporate research at TSMC. Despite the variety, the fundamental operating principle—the field effect that switches transistors on and off—will likely remain the same, suggests Suman Datta, an IEEE Fellow, professor of electrical and computer engineering at Georgia Tech, and director of the multi-university nanotech research center ASCENT. This device will likely have minimum critical dimensions of 1 nanometer or less, enabling device densities of 10 trillion per square centimeter, says Tsu-Jae King Liu, an IEEE Fellow, dean of the college of engineering at the University of California, Berkeley, and a member of Intel’s board of directors. Experts seem to agree that the transistor of 2047 will need new materials and probably a stacked or 3D architecture, expanding on the planned complementary field-effect transistor (CFET, or 3D-stacked CMOS). [For more on the CFET, see "Taking Moore's Law to New Heights."] And the transistor channel, which now runs parallel to the plane of the silicon, may need to become vertical in order to continue to increase in density, says Datta. AMD senior fellow Richard Schultz suggests that the main aim in developing these new devices will be power. “The focus will be on reducing power and the need for advanced cooling solutions,” he says. “Significant focus on devices that work at lower voltages is required.” Will transistors still be the heart of most computing in 25 years? It’s hard to imagine a world where computing is not done with transistors, but, of course, vacuum tubes were once the digital switch of choice. Startup funding for quantum computing, which does not directly rely on transistors, reached US $1.4 billion in 2021, according to McKinsey & Co. But advances in quantum computing won’t happen fast enough to challenge the transistor by 2047, experts in electron devices say. “Transistors will remain the most important computing element,” says Sayeef Salahuddin, an IEEE Fellow and professor of electrical engineering and computer science at the University of California, Berkeley. “Currently, even with an ideal quantum computer, the potential areas of application seem to be rather limited compared to classical computers.” Sri Samavedam, senior vice president of CMOS technologies at the European chip R&D center Imec, agrees. “Transistors will still be very important computing elements for a majority of the general-purpose compute applications,” says Samavedam. 
“One cannot ignore the efficiencies realized from decades of continuous optimization of transistors.” Has the transistor of 2047 already been invented? Twenty-five years is a long time, but in the world of semiconductor R&D, it’s not that long. “In this industry, it usually takes about 20 years from [demonstrating a concept] to introduction into manufacturing,” says Samavedam. “It is safe to assume that the transistor or switch architectures of 2047 have already been demonstrated on a lab scale” even if the materials involved won’t be exactly the same. King Liu, who demonstrated the modern FinFET about 25 years ago with colleagues at Berkeley, agrees. But the idea that the transistor of 2047 is already sitting in a lab somewhere isn’t universally shared. Salahuddin, for one, doesn’t think it’s been invented yet. “But just like the FinFET in the 1990s, it is possible to make a reasonable prediction for the geometric structure” of future transistors, he says. AMD’s Schultz says you can glimpse this structure in proposed 3D-stacked devices made of 2D semiconductors or carbon-based semiconductors. “Device materials that have not yet been invented could also be in scope in this time frame,” he adds. Will silicon still be the active part of most transistors in 2047? Experts say that the heart of most devices, the transistor channel region, will still be silicon, or possibly silicon-germanium—which is already making inroads—or germanium. But in 2047 many chips may use semiconductors that are considered exotic today. These could include oxide semiconductors like indium gallium zinc oxide; 2D semiconductors, such as the metal dichalcogenide tungsten disulfide; and one-dimensional semiconductors, such as carbon nanotubes. Or even “others yet to be invented,” says Imec’s Samavedam. Silicon-based chips may be integrated in the same package with chips that rely on newer materials, just as processor makers are today integrating chips using different silicon manufacturing technologies into the same package, notes IEEE Fellow Gabriel Loh, a senior fellow at AMD. Which semiconductor material is at the heart of the device may not even be the central issue in 2047. “The choice of channel material will essentially be dictated by which material is the most compatible with many other materials that form other parts of the device,” says Salahuddin. And we know a lot about integrating materials with silicon. In 2047, where will transistors be common where they are not found today? Everywhere. No, seriously. Experts really do expect some amount of intelligence and sensing to creep into every aspect of our lives. That means devices will be attached to our bodies and implanted inside them; embedded in all kinds of infrastructure, including roads, walls, and houses; woven into our clothing; stuck to our food; swaying in the breeze in grain fields; watching just about every step in every supply chain; and doing many other things in places nobody has thought of yet. Transistors will be “everywhere that needs computation, command and control, communications, data collection, storage and analysis, intelligence, sensing and actuation, interaction with humans, or an entrance portal to the virtual and mixed reality world,” sums up Stanford’s Wong. 
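As a rough, editorial back-of-envelope check on the density figure quoted above (our arithmetic, not a claim from the experts interviewed), ten trillion devices per square centimeter works out to

$$
\frac{10^{14}\ \text{nm}^2/\text{cm}^2}{10^{13}\ \text{devices}/\text{cm}^2} = 10\ \text{nm}^2\ \text{per device},
\qquad \sqrt{10\ \text{nm}^2} \approx 3.2\ \text{nm average pitch},
$$

a spacing that is only plausible with critical dimensions near 1 nanometer, aggressive 3D stacking, or both, which is consistent with the stacked architectures the experts describe.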
This article appears in the December 2022 print issue as “The Transistor of 2047.”
The Transistor at 75: The past, present, and future of the modern world’s most important invention
  • How the First Transistor Worked: Even its inventors didn’t fully understand the point-contact transistor
  • The Ultimate Transistor Timeline: The transistor’s amazing evolution from point contacts to quantum tunnels
  • The State of the Transistor in 3 Charts: In 75 years, it’s become tiny, mighty, ubiquitous, and just plain weird
  • 3D-Stacked CMOS Takes Moore’s Law to New Heights: When transistors can’t get any smaller, the only direction is up
  • The Transistor of 2047: Expert Predictions: What will the device be like on its 100th anniversary?
  • The Future of the Transistor Is Our Future: Nothing but better devices can tackle humanity’s growing challenges
  • John Bardeen’s Terrific Transistorized Music Box: This simple gadget showed off the magic of the first transistor
  • How the Graphical User Interface Was Invented
    Nov 20, 2022 12:00 PM PST
    Mice, windows, icons, and menus: these are the ingredients of computer interfaces designed to be easy to grasp, simplicity itself to use, and straightforward to describe. The mouse is a pointer. Windows divide up the screen. Icons symbolize application programs and data. Menus list choices of action. But the development of today’s graphical user interface was anything but simple. It took some 30 years of effort by engineers and computer scientists in universities, government laboratories, and corporate research groups, piggybacking on each other’s work, trying new ideas, repeating each other’s mistakes. This article was first published as “Of Mice and menus: designing the user-friendly interface.” It appeared in the September 1989 issue of IEEE Spectrum. A PDF version is available on IEEE Xplore. The photographs and diagrams appeared in the original print version. Throughout the 1970s and early 1980s, many of the early concepts for windows, menus, icons, and mice were arduously researched at Xerox Corp.’s Palo Alto Research Center (PARC), Palo Alto, Calif. In 1973, PARC developed the prototype Alto, the first of two computers that would prove seminal in this area. More than 1200 Altos were built and tested. From the Alto’s concepts, starting in 1975, Xerox’s System Development Department then developed the Star and introduced it in 1981—the first such user-friendly machine sold to the public. In 1984, the low-cost Macintosh from Apple Computer Inc., Cupertino, Calif., brought the friendly interface to thousands of personal computer users. During the next five years, the price of RAM chips fell enough to accommodate the huge memory demands of bit-mapped graphics, and the Mac was followed by dozens of similar interfaces for PCs and workstations of all kinds. By now, application programmers are becoming familiar with the idea of manipulating graphic objects. The Mac’s success during the 1980s spurred Apple Computer to pursue legal action over ownership of many features of the graphical user interface. Suits now being litigated could assign those innovations not to the designers and their companies, but to those who first filed for legal protection on them. The GUI started with Sketchpad The grandfather of the graphical user interface was Sketchpad [see photograph]. Massachusetts Institute of Technology student Ivan E. Sutherland built it in 1962 as a Ph.D. thesis at MIT’s Lincoln Laboratory in Lexington, Mass. Sketchpad users could not only draw points, line segments, and circular arcs on a cathode ray tube (CRT) with a light pen—they could also assign constraints to, and relationships among, whatever they drew. Arcs could have a specified diameter, lines could be horizontal or vertical, and figures could be built up from combinations of elements and shapes. Figures could be moved, copied, shrunk, expanded, and rotated, with their constraints (shown as onscreen icons) dynamically preserved. At a time when a CRT monitor was a novelty in itself, the idea that users could interactively create objects by drawing on a computer was revolutionary. Moreover, to zoom in on objects, Sutherland wrote the first window-drawing program, which required him to come up with the first clipping algorithm. Clipping is a software routine that calculates which part of a graphic object is to be displayed and displays only that part on the screen. 
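To make that idea concrete, here is a minimal sketch of a line-segment clipper against a rectangular window, written in Python for readability. It uses the standard Liang-Barsky parametric method as one illustration of the technique; it is not Sketchpad's actual code, and the window and segment coordinates in the example are invented.

```python
# A minimal sketch of the kind of clipping routine described above: given a line
# segment and an axis-aligned rectangular window, keep only the portion inside.
# Liang-Barsky parametric clipping, shown here purely as an illustration.

def clip_segment(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Return the clipped segment as (x0, y0, x1, y1), or None if fully outside."""
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0  # parametric range of the visible portion of the segment

    # Each (p, q) pair tests the segment against one window edge.
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:                 # segment parallel to this edge
            if q < 0:
                return None        # parallel and entirely outside
        else:
            t = q / p
            if p < 0:              # segment is entering the window at this edge
                if t > t1:
                    return None
                t0 = max(t0, t)
            else:                  # segment is leaving the window at this edge
                if t < t0:
                    return None
                t1 = min(t1, t)

    return (x0 + t0 * dx, y0 + t0 * dy, x0 + t1 * dx, y0 + t1 * dy)

# Example: clip a segment against a 100-by-100 window anchored at the origin.
print(clip_segment(-50, 20, 150, 80, 0, 0, 100, 100))  # -> (0.0, 35.0, 100.0, 65.0)
```

Running the example keeps only the part of the line between x = 0 and x = 100, which is exactly the behavior described here: anything outside the window is never drawn.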
The program must calculate where a line is to be drawn, compare that position to the coordinates of the window in use, and prevent the display of any line segment whose coordinates fall outside the window. Though films of Sketchpad in operation were widely shown in the computer research community, Sutherland says today that there was little immediate fallout from the project. Running on MIT’s TX-2 mainframe, it demanded too much computing power to be practical for individual use. Many other engineers, however, see Sketchpad’s design and algorithms as a primary influence on an entire generation of research into user interfaces. The origin of the computer mouse The light pens used to select areas of the screen by interactive computer systems of the 1950s and 1960s—including Sketchpad—had drawbacks. To do the pointing, the user’s arm had to be lifted up from the table, and after a while that got tiring. Picking up the pen required fumbling around on the table or, if it had a holder, taking the time after making a selection to put it back. Sensing an object with a light pen was straightforward: the computer displayed spots of light on the screen and interrogated the pen as to whether it sensed a spot, so the program always knew just what was being displayed. Locating the position of the pen on the screen required more sophisticated techniques—like displaying a cross pattern of nine points on the screen, then moving the cross until it centered on the light pen. In 1964, Douglas Engelbart, a research project leader at SRI International in Menlo Park, Calif., tested all the commercially available pointing devices, from the still-popular light pen to a joystick and a Graphicon (a curve-tracing device that used a pen mounted on the arm of a potentiometer). But he felt the selection failed to cover the full spectrum of possible pointing devices, and somehow he should fill in the blanks. Then he remembered a 1940s college class he had taken that covered the use of a planimeter to calculate area. (A planimeter has two arms, with a wheel on each. The wheels can roll only along their axes; when one of them rolls, the other must slide.) If a potentiometer were attached to each wheel to monitor its rotation, he thought, a planimeter could be used as a pointing device. Engelbart explained his roughly sketched idea to engineer William English, who with the help of the SRI machine shop built what they quickly dubbed “the mouse.” This first mouse was big because it used single-turn potentiometers: one rotation of the wheels had to be scaled to move a cursor from one side of the screen to the other. But it was simple to interface with the computer: the processor just read frequent samples of the potentiometer positioning signals through analog-to-digital converters. The cursor moved by the mouse was easy to locate, since readings from the potentiometer determined the position of the cursor on the screen-unlike the light pen. But programmers for later windowing systems found that the software necessary to determine which object the mouse had selected was more complex than that for the light pen: they had to compare the mouse’s position with that of all the objects displayed onscreen. The computer mouse gets redesigned—and redesigned again Engelbart’s group at SRI ran controlled experiments with mice and other pointing devices, and the mouse won hands down. People adapted to it quickly, it was easy to grab, and it stayed where they put it. Still, Engelbart wanted to tinker with it. 
After experimenting, his group had concluded that the proper ratio of cursor movement to mouse movement was about 2:1, but he wanted to try varying that ratio—decreasing it at slow speeds and raising it at fast speeds—to improve user control of fine movements and speed up larger movements. Some modern mouse-control software incorporates this idea, including that of the Macintosh. The mouse, still experimental at this stage, did not change until 1971. Several members of Engelbart’s group had moved to the newly established PARC, where many other researchers had seen the SRI mouse and the test report. They decided there was no need to repeat the tests; any experimental systems they designed would use mice. Said English, “This was my second chance to build a mouse; it was obvious that it should be a lot smaller, and that it should be digital.” Chuck Thacker, then a member of the research staff, advised PARC to hire inventor Jack Hawley to build it. Hawley decided the mouse should use shaft encoders, which measure position by a series of pulses, instead of potentiometers (both were covered in Engelbart’s 1970 patent), to eliminate the expensive analog-to-digital converters. The basic principle, of one wheel rolling while the other slid, was licensed from SRI. In 1972, the mouse changed again. Ron Rider, now vice president of systems architecture at PARC but then a new arrival, said he was using the wheel mouse while an engineer made excuses for its asymmetric operation (one wheel dragging while one turned). “I suggested that they turn a trackball upside down, make it small, and use it as a mouse instead,” Rider told IEEE Spectrum. This device came to be known as the ball mouse. “Easiest patent I ever got,” Rider said. “It took me five minutes to think of, half an hour to describe to the attorney, and I was done.”
Defining terms
  • Bit map: The pixel pattern that makes up the graphic display on a computer screen.
  • Clicking: The motion of pressing a mouse button to initiate an action by software; some actions require double-clicking.
  • Graphical user interface (GUI): The combination of windowing displays, menus, icons, and a mouse that is increasingly used on personal computers and workstations.
  • Icon: An onscreen drawing that represents programs or data.
  • Menu: A list of command options currently available to the computer user; some stay onscreen, while pop-up or pull-down menus are requested by the user.
  • Mouse: A device whose motion across a desktop or other surface causes an on-screen cursor to move commensurately; today’s mice move on a ball and have one, two, or three buttons.
  • Raster display: A cathode ray tube on which images are displayed as patterns of dots, scanned onto the screen sequentially in a predetermined pattern of lines.
  • Vector display: A cathode ray tube whose gun scans lines, or vectors, onto the screen phosphor.
  • Window: An area of a computer display, usually one of several, in which a particular program is executing.
In the PARC ball mouse design, the weight of the mouse is transferred to the ball by a swivel device and onto one or two casters at the end of the mouse farthest from the wire “tail.” A prototype was built by Xerox’s Electronics Division in El Segundo, Calif., then redesigned by Hawley. 
The rolling ball turned two perpendicular shafts, with a drum on the end of each that was coated with alternating stripes of conductive and nonconductive material. As the drum turned, the stripes transmitted electrical impulses through metal wipers. When Apple Computer decided in 1979 to design a mouse for its Lisa computer, the design mutated yet again. Instead of a metal ball held against the substrate by a swivel, Apple used a rubber ball whose traction depended on the friction of the rubber and the weight of the ball itself. Simple pads on the bottom of the case carried the weight, and optical scanners detected the motion of the internal wheels. The device had loose tolerances and few moving parts, so that it cost perhaps a quarter as much to build as previous ball mice. How the computer mouse gained and lost buttons The first, wooden, SRI mouse had only one button, to test the concept. The plastic batch of SRI mice had three side-by-side buttons—all there was room for, Engelbart said. The first PARC mouse had a column of three buttons, again because that best fit the mechanical design. Today, the Apple mouse has one button, while the rest have two or three. The issue is no longer room—a standard 6-by-10-cm mouse could now have dozens of buttons—but human factors, and the experts have strong opinions. Said English, now director of internationalization at Sun Microsystems Inc., Mountain View, Calif.: “Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.” He sees two buttons as the minimum because two functions are basic to selecting an object: pointing to its start, then extending the motion to the end of the object. William Verplank, a human factors specialist in the group that tested the graphical interface at Xerox from 1978 into the early 1980s, concurred. He told Spectrum that with three buttons, Alto users forgot which button did what. The group’s tests showed that one button was also confusing, because it required actions such as double-clicking to select and then open a file. “We have agonizing videos of naive users struggling” with these problems, Verplank said. They concluded that for most users, two buttons (as used on the Star) are optimal, if a button means the same thing in every application. English experimented with one-button mice at PARC before concluding they were a bad idea. But many interface designers dislike multiple buttons, saying that double-clicking a single button to select an item is easier than remembering which button points and which extends. Larry Tesler, formerly a computer scientist at PARC, brought the one-button mouse to Apple, where he is now vice president of advanced technology. The company’s rationale is that to attract novices to its computers one button was as simple as it could get. More than two million one-button Apple mice are now in use. The Xerox and Microsoft two-button mice are less common than either Apple’s ubiquitous one-button model or the three-button mice found on technical workstations. Dozens of companies manufacture mice today; most are slightly smaller than a pack of cigarettes, with minor variations in shape. How windows first came to the computer screen In 1962, Sketchpad could split its screen horizontally into two independent sections. One section could, for example, give a close-up view of the object in the other section. 
Researchers call Sketchpad the first example of tiled windows, which are laid out side by side. They differ from overlapping windows, which can be stacked on top of each other, or overlaid, obscuring all or part of the lower layers. Windows were an obvious means of adding functionality to a small screen. In 1969, Engelbart equipped NLS (as the On-Line System he invented at SRI during the 1960s was known, to distinguish it from the Off-Line System known as FLS) with windows. They split the screen into multiple parts horizontally or vertically, and introduced cross-window editing with a mouse. By 1972, led by researcher Alan Kay, the Smalltalk programming language group at Xerox PARC had implemented their version of windows. They were working with far different technology from Sutherland or Engelbart: by deciding that their images had to be displayed as dots on the screen, they led a move from vector to raster displays, to make it simple to map the assigned memory location of each of those spots. This was the bit map invented at PARC, and made viable during the 1980s by continual performance improvements in processor logic and memory speed. Experimenting with bit-map manipulation, Smalltalk researcher Dan Ingalls developed the bit-block transfer procedure, known as BitBlt. The BitBlt software enabled application programs to mix and manipulate rectangular arrays of pixel values in on-screen or off-screen memory, or between the two, combining the pixel values and storing the result in the appropriate bit-map location. BitBlt made it much easier to write programs to scroll a window (move an image through it), resize (enlarge or contract) it, and drag windows (move them from one location to another on screen). It led Kay to create overlapping windows. They were soon implemented by the Smalltalk group, but made clipping harder. In a tiling system, explained researcher Peter Deutsch, who worked with the Smalltalk group, the clipping borders are simply horizontal or vertical lines from one screen border to another, and software just tracks the location of those lines. But overlapping windows may appear anywhere on the screen, randomly obscuring bits and pieces of other windows, so that quite irregular regions must be clipped. Thus application software must constantly track which portions of its windows remain visible. Some researchers still question whether overlapping windows offer more benefits than tiled, at least above a certain screen size, on the grounds that screens with overlapping windows become so messy the user gets lost. Others argue that overlapping windows more closely match users’ work patterns, since no one arranges the papers on their physical desktop in neat horizontal and vertical rows. Among software engineers, however, overlapping windows seem to have won for the user interface world. So has the cut-and-paste editing model that Larry Tesler developed, first for the Gypsy text editor he wrote at PARC and later for Apple. Charles Irby—who worked on Xerox’s windows and is now vice president of development at Metaphor Computer Systems Inc., Mountain View, Calif.—noted, however, that cut-and-paste worked better for pure text-editing than for moving graphic objects from one application to another. 
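As a rough sketch of what a BitBlt-style routine does, the following Python fragment copies and combines a rectangular block of pixel values between two bit maps represented as nested lists. The function name, the one-value-per-pixel representation, and the pluggable combination rule are illustrative simplifications of the idea described above, not the PARC implementation, which operated on packed bits.

```python
# A minimal sketch of the rectangular block transfer at the heart of BitBlt:
# combine a w-by-h block of a source bit map into a destination bit map,
# using a caller-supplied rule to merge source and destination pixel values.

def bitblt(src, sx, sy, dst, dx, dy, w, h, rule=lambda s, d: s):
    """Combine the w-by-h block of src at (sx, sy) into dst at (dx, dy)."""
    for row in range(h):
        for col in range(w):
            s = src[sy + row][sx + col]
            d = dst[dy + row][dx + col]
            dst[dy + row][dx + col] = rule(s, d)

# Example: a 32x32 "screen" and an 8x8 "sprite"; copy it once, then XOR it in.
screen = [[0] * 32 for _ in range(32)]
sprite = [[1] * 8 for _ in range(8)]
bitblt(sprite, 0, 0, screen, 4, 4, 8, 8)                             # plain copy
bitblt(sprite, 0, 0, screen, 10, 10, 8, 8, rule=lambda s, d: s ^ d)  # XOR combine
```

The swappable rule in the last line is the kind of pixel-combining flexibility the article credits to BitBlt, and moving rectangles of pixels this way is what made scrolling, resizing, and dragging windows practical to program.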
The origin of the computer menu bar Menus—functions continuously listed onscreen that could be called into action with key combinations—were commonly used in defense computing by the 1960s. But it was only with the advent of BitBlt and windows that menus could be made to appear as needed and to disappear after use. Combined with a pointing device to indicate a user’s selection, they are now an integral part of the user-friendly interface: users no longer need to refer to manuals or memorize available options. Instead, the choices can be called up at a moment’s notice whenever needed. And menu design has evolved. Some new systems use nested hierarchies of menus; others offer different menu versions—one with the most commonly used commands for novices, another with all available commands for the experienced user. Among the first to test menus on demand was PARC researcher William Newman, in a program called Markup. Hard on his heels, the Smalltalk group built in pop-up menus that appeared on screen at the cursor site when the user pressed one of the mouse buttons. Implementation was on the whole straightforward, recalled Deutsch. The one exception was determining whether the menu or the application should keep track of the information temporarily obscured by the menu. In the Smalltalk 76 version, the popup menu saved and restored the screen bits it overwrote. But in today’s multitasking systems, that would not work, because an application may change those bits without the menu’s knowledge. Such systems add another layer to the operating system: a display manager that tracks what is written where. The production Xerox Star, in 1981, featured a further advance: a menu bar, essentially a row of words indicating available menus that could be popped up for each window. Human factors engineer Verplank recalled that the bar was at first located at the bottom of its window. But the Star team found users were more likely to associate a bar with the window below it, so it was moved to the top of its window. Apple simplified things in its Lisa and Macintosh with a single bar placed at the top of the screen. This menu bar relates only to the window in use: the menus could be ‘‘pulled down” from the bar, to appear below it. Designer William D. Atkinson received a patent (assigned to Apple Computer) in August 1984 for this innovation. One new addition that most user interface pioneers consider an advantage is the tear-off menu, which the user can move to a convenient spot on the screen and “pin” there, always visible for ready access. Many windowing interfaces now offer command-key or keyboard alternatives for many commands as well. This return to the earliest of user interfaces—key combinations—neatly supplements menus, providing both ease of use for novices and for the less experienced, and speed for those who can type faster than they can point to a menu and click on a selection. How the computer “icon” got its name Sketchpad had on-screen graphic objects that represented constraints (for example, a rule that lines be the same length), and the Flex machine built in 1967 at the University of Utah by students Alan Kay and Ed Cheadle had squares that represented programs and data (like today’s computer “folders”). Early work on icons was also done by Bell Northern Research, Ottawa, Canada, stemming from efforts to replace the recently legislated bilingual signs with graphic symbols. But the concept of the computer “icon” was not formalized until 1975. 
David Canfield Smith, a computer science graduate student at Stanford University in California, began work on his Ph.D. thesis in 1973. His advisor was PARC’s Kay, who suggested that he look at using the graphics power of the experimental Alto not just to display text, but rather to help people program. Smith took the term icon from the Russian Orthodox church, where an icon is more than an image, because it embodies properties of what it represents: a Russian icon of a saint is holy and is to be venerated. Smith’s computer icons contained all the properties of the programs and data represented, and therefore could be linked or acted on as if they were the real thing. After receiving his Ph.D. in 1975, Smith joined Xerox in 1976 to work on Star development. The first thing he did, he said, was to recast his concept of icons in office terms. “I looked around my office and saw papers, folders, file cabinets, a telephone, and bookshelves, and it was an easy translation to icons,” he said. Xerox researchers developed, tested, and revised icons for the Star interface for three years before the first version was complete. At first they attempted to make the icons look like a detailed photographic rendering of the object, recalled Irby, who worked on testing and refining the Xerox windows. Trading off label space, legibility, and the number of icons that fit on the screen, they decided to constrain icons to a 1-inch (2.5-centimeter) square of 64 by 64 pixels, or 512 eight-bit bytes. Then, Verplank recalls, they discovered that because of a background pattern based on two-pixel dots, the right-hand side of the icons appeared jagged. So they increased the width of the icons to 65 pixels, despite an outcry from programmers who liked the neat 16-bit breakdown. But the increase stuck, Verplank said, because they had already decided to store 72 bits per side to allow for white space around each icon. After settling on a size for the icons, the Star developers tested four sets developed by two graphic designers and two software engineers. They discovered that, for example, resizing may cause problems. They shrunk the icon for a person—a head and shoulders—in order to use several of them to represent a group, only to hear one test subject say the screen resolution made the reduced icon look like a cross above a tombstone. Computer graphics artist Norm Cox, now of Cox & Hall, Dallas, Texas, was finally hired to redesign the icons. Icon designers today still wrestle with the need to make icons adaptable to the many different system configurations offered by computer makers. Artist Karen Elliott, who has designed icons for Microsoft, Apple, Hewlett-Packard Co., and others, noted that on different systems an icon may be displayed in different colors, several resolutions, and a variety of gray shades, and it may also be inverted (light and dark areas reversed). In the past few years, another concern has been added to icon designers’ tasks: internationalization. Icons designed in the United States often lack space for translations into languages other than English. Elliott therefore tries to leave space for both the longer words and the vertical orientation of some languages. The main rule is to make icons simple, clean, and easily recognizable. Discarded objects are placed in a trash can on the Macintosh. 
On the NeXT Computer System, from NeXT Inc., Palo Alto, Calif.—the company formed by Apple cofounder Steven Jobs after he left Apple—they are dumped into a Black Hole. Elliott sees NeXT’s black hole as one of the best icons ever designed: “It is distinct; its roundness stands out from the other, square icons, and this is important on a crowded display. It fits my image of information being sucked away, and it makes it clear that dumping something is serious.” English disagrees vehemently. The black hole “is fundamentally wrong,” he said. “You can dig paper out of a wastebasket, but you can’t dig it out of a black hole.” Another critic called the black hole familiar only to “computer nerds who read mostly science fiction and comics,” not to general users. With the introduction of the Xerox Star in June 1981, the graphical user interface, as it is known today, arrived on the market. Though not a commercial triumph, the Star generated great interest among computer users, as the Alto before it had within the universe of computer designers. Even before the Star was introduced, Jobs, then still at Apple, had visited Xerox PARC in November 1979 and asked the Smalltalk researchers dozens of questions about the Alto’s internal design. He later recruited Larry Tesler from Xerox to design the user interface of the Apple Lisa. With the Lisa and then the Macintosh, introduced in January 1983 and January 1984 respectively, the graphical user interface reached the low-cost, high-volume computer market. Buyers deemed the Lisa, at almost $10,000, too expensive for the office market. But aided by prizewinning advertising and its lower price, the Macintosh took the world by storm. Early Macs had only 128K bytes of RAM, which made them slow to respond because it was too little memory for heavy graphic manipulation. Also, the time needed for programmers to learn its Toolbox of graphics routines delayed application packages until well into 1985. But the Mac’s ease of use was indisputable, and it generated interest that spilled over into the MS-DOS world of IBM PCs and clones, as well as Unix-based workstations. Who owns the graphical user interface? The widespread acceptance of such interfaces, however, has led to bitter lawsuits to establish exactly who owns what. So far, none of several litigious companies has definitively established that it owns the software that implements windows, icons, or early versions of menus. But the suits continue. Virtually all the companies that make and sell either wheel or ball mice paid license fees to SRI or to Xerox for their patents. Engelbart recalled that SRI patent attorneys inspected all the early work on the interface, but understood only hardware. After looking at developments like the implementation of windows, they told him that none of it was patentable. At Xerox, the Star development team proposed 12 patents having to do with the user interface. The company’s patent committee rejected all but two on hardware—one on BitBlt, the other on the Star architecture. At the time, Charles Irby said, it was a good decision. Patenting required full disclosure, and no precedents then existed for winning software patent suits. The most recent and most publicized suit was filed in March 1988, by Apple, against both Microsoft and Hewlett-Packard Co., Palo Alto, Calif. 
Apple alleges that HP’s New Wave interface, requiring version 2.03 of Microsoft’s Windows program, embodies the copyrighted “audio visual computer display” of the Macintosh without permission; that the displays of Windows 2.03 are illegal copies of the Mac’s audiovisual works; and that Windows 2.03 also exceeds the rights granted in a November 1985 agreement in which Microsoft acknowledged that the displays in Windows 1.0 were derivatives of those in Apple’s Lisa and Mac. In March 1989, U.S. District Judge William W. Schwarzer ruled Microsoft had exceeded the bounds of its license in creating Windows 2.03. Then in July 1989 Schwarzer ruled that all but 11 of the 260 items that Apple cited in its suit were, in fact, acceptable under the 1985 agreement. The larger issue—whether Apple’s copyrights are valid, and whether Microsoft and HP infringed on them—will not now be examined until 1990. Among those 11 are overlapping windows and movable icons. According to Pamela Samuelson, a noted software intellectual property expert and visiting professor at Emory University Law School, Atlanta, Ga., many experts would regard both as functional features of an interface that cannot be copyrighted, rather than “expressions” of an idea protectable by copyright. But lawyers for Apple—and for other companies that have filed lawsuits to protect the “look and feel” of their screen displays—maintain that if such protection is not granted, companies will lose the economic incentive to market technological innovations. How is Apple to protect its investment in developing the Lisa and Macintosh, they argue, if it cannot license its innovations to companies that want to take advantage of them? If the Apple-Microsoft case does go to trial on the copyright issues, Samuelson said, the court may have to consider whether Apple can assert copyright protection for overlapping windows, an interface feature on which patents have also been granted. In April 1989, for example, Quarterdeck Office Systems Inc., Santa Monica, Calif., received a patent for a multiple windowing system in its Desq system software, introduced in 1984. Adding fuel to the legal fire, Xerox said in May 1989 it would ask for license fees from companies that use the graphical user interface. But it is unclear whether Xerox has an adequate claim to either copyright or patent protection for the early graphical interface work done at PARC. Xerox did obtain design patents on later icons, noted human factors engineer Verplank. Meanwhile, both Metaphor and Sun Microsystems have negotiated licenses with Xerox for their own interfaces. To Probe Further The September 1989 IEEE Computer contains an article, “The Xerox ‘Star’: A Retrospective,” by Jeff Johnson et al., covering development of the Star. “Designing the Star User Interface,” [PDF] by David C. Smith et al., appeared in the April 1982 issue of Byte. The Sept. 12, 1989, PC Magazine contains six articles on graphical user interfaces for personal computers and workstations. The July 1989 Byte includes “A Guide to [Graphical User Interfaces],” by Frank Hayes and Nick Baran, which describes 12 current interfaces for workstations and personal computers. “The Interface of Tomorrow, Today,” by Howard Rheingold, in the July 10, 1989, InfoWorld does the same. “The interface that launched a thousand imitations,” by Richard Rawles, in the March 21, 1989, MacWeek covers the Macintosh interface. The human factors of user interface design are discussed in The Psychology of Everyday Things, by Donald A. 
Norman (Basic Books Inc., New York, 1988). The January 1989 IEEE Software contains several articles on methods, techniques, and tools for designing and implementing graphical interfaces. The Way Things Work, by David Macaulay (Houghton Mifflin Co., Boston, 1988), contains a detailed drawing of a ball mouse. The October 1985 IEEE Spectrum covered Xerox PARC’s history in “Research at Xerox PARC: a founder’s assessment,” by George Pake (pp. 54-61) and “Inside the PARC: the ‘information architects,’” by Tekla Perry and Paul Wallich (pp. 62-75). William Atkinson received patent no. 4,464,652 for the pulldown menu system on Aug. 8, 1984, and assigned it to Apple. Gary Pope received patent no. 4,823,108, for an improved system for displaying images in “windows” on a computer screen, on April 18, 1989, and assigned it to Quarterdeck Office Systems. The wheel mouse patent, no. 3,541,541, “X-Y position indicator for a display system,” was issued to Douglas Engelbart on Nov. 17, 1970, and assigned to SRI International. The ball mouse patent, no. 3,835,464, was issued to Ronald Rider on Sept. 10, 1974, and assigned to Xerox. The first selection device tests to include a mouse are covered in “Display-Selection Techniques for Text Manipulation,” by William English, Douglas Engelbart, and Melvyn Berman, in IEEE Transactions on Human Factors in Electronics, March 1967. Sketchpad: A Man-Machine Graphical Communication System, by Ivan E. Sutherland (Garland Publishing Inc., New York City and London, 1980), reprints his 1963 Ph.D. thesis.
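Returning to the icon dimensions discussed earlier in this article: the storage figures quoted there are easy to verify with a couple of lines of Python. This is purely illustrative arithmetic, not code from the Star project.

    # A 64-by-64 icon at 1 bit per pixel, as described above.
    bits = 64 * 64
    print(bits // 8)   # 512 bytes, the "512 eight-bit bytes" quoted in the text

    # Widening the icon to 65 pixels still fit, because 72 bits per side had
    # already been reserved to leave white space around each icon.
    print(65 <= 72)    # True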
  • How the First Transistor Worked
    Nov 20, 2022 08:00 AM PST
    The vacuum-tube triode wasn’t quite 20 years old when physicists began trying to create its successor, and the stakes were huge. Not only had the triode made long-distance telephony and movie sound possible, it was driving the entire enterprise of commercial radio, an industry worth more than a billion dollars in 1929. But vacuum tubes were power-hungry and fragile. If a more rugged, reliable, and efficient alternative to the triode could be found, the rewards would be immense. The goal was a three-terminal device made out of semiconductors that would accept a low-current signal into an input terminal and use it to control the flow of a larger current flowing between two other terminals, thereby amplifying the original signal. The underlying principle of such a device would be something called the field effect—the ability of electric fields to modulate the electrical conductivity of semiconductor materials. The field effect was already well known in those days, thanks to diodes and related research on semiconductors. This article is part of our special report on the 75th anniversary of the invention of the transistor. But building such a device had proved an insurmountable challenge to some of the world’s top physicists for more than two decades. Patents for transistor-like devices had been filed starting in 1925, but the first recorded instance of a working transistor was the legendary point-contact device built at AT&T Bell Telephone Laboratories in the fall of 1947. Though the point-contact transistor was the most important invention of the 20th century, there exists, surprisingly, no clear, complete, and authoritative account of how the thing actually worked. Modern, more robust junction and planar transistors rely on the physics in the bulk of a semiconductor, rather than the surface effects exploited in the first transistor. And relatively little attention has been paid to this gap in scholarship. In the cutaway photo of a point-contact, two thin conductors are visible; these connect to the points that make contact with a tiny slab of germanium. One of these points is the emitter and the other is the collector. A third contact, the base, is attached to the reverse side of the germanium.AT&T ARCHIVES AND HISTORY CENTER It was an ungainly looking assemblage of germanium, plastic, and gold foil, all topped by a squiggly spring. Its inventors were a soft-spoken Midwestern theoretician, John Bardeen, and a voluble and “ somewhat volatile” experimentalist, Walter Brattain. Both were working under William Shockley, a relationship that would later prove contentious. In November 1947, Bardeen and Brattain were stymied by a simple problem. In the germanium semiconductor they were using, a surface layer of electrons seemed to be blocking an applied electric field, preventing it from penetrating the semiconductor and modulating the flow of current. No modulation, no signal amplification. Sometime late in 1947 they hit on a solution. It featured two pieces of barely separated gold foil gently pushed by that squiggly spring into the surface of a small slab of germanium. Textbooks and popular accounts alike tend to ignore the mechanism of the point-contact transistor in favor of explaining how its more recent descendants operate. 
Indeed, the current edition of that bible of undergraduate EEs, The Art of Electronics by Horowitz and Hill, makes no mention of the point-contact transistor at all, glossing over its existence by erroneously stating that the junction transistor was a “Nobel Prize-winning invention in 1947.” But the transistor that was invented in 1947 was the point-contact; the junction transistor was invented by Shockley in 1948. So it seems appropriate somehow that the most comprehensive explanation of the point-contact transistor is contained within John Bardeen’s lecture for that Nobel Prize, in 1956. Even so, reading it gives you the sense that a few fine details probably eluded even the inventors themselves. “A lot of people were confused by the point-contact transistor,” says Thomas Misa, former director of the Charles Babbage Institute for the History of Science and Technology, at the University of Minnesota. Textbooks and popular accounts alike tend to ignore the mechanism of the point-contact transistor in favor of explaining how its more recent descendants operate. A year after Bardeen’s lecture, R. D. Middlebrook, a professor of electrical engineering at Caltech who would go on to do pioneering work in power electronics, wrote: “Because of the three-dimensional nature of the device, theoretical analysis is difficult and the internal operation is, in fact, not yet completely understood.” Nevertheless, and with the benefit of 75 years of semiconductor theory, here we go. The point-contact transistor was built around a thumb-size slab of n-type germanium, which has an excess of negatively charged electrons. This slab was treated to produce a very thin surface layer that was p-type, meaning it had an excess of positive charges. These positive charges are known as holes. They are actually localized deficiencies of electrons that move among the atoms of the semiconductor very much as a real particle would. An electrically grounded electrode was attached to the bottom of this slab, creating the base of the transistor. The two strips of gold foil touching the surface formed two more electrodes, known as the emitter and the collector. That’s the setup. In operation, a small positive voltage—just a fraction of a volt—is applied to the emitter, while a much larger negative voltage—4 to 40 volts—is applied to the collector, all with reference to the grounded base. The interface between the p-type layer and the n-type slab created a junction just like the one found in a diode: Essentially, the junction is a barrier that allows current to flow easily in only one direction, toward lower voltage. So current could flow from the positive emitter across the barrier, while no current could flow across that barrier into the collector. The Western Electric Type-2 point-contact transistor was the first transistor to be manufactured in large quantities, in 1951, at Western Electric’s plant in Allentown, Pa. By 1960, when this photo was taken, the plant had switched to producing junction transistors.AT&T ARCHIVES AND HISTORY CENTER Now, let’s look at what happens down among the atoms. First, we’ll disconnect the collector and see what happens around the emitter without it. The emitter injects positive charges—holes—into the p-type layer, and they begin moving toward the base. But they don’t make a beeline toward it. The thin layer forces them to spread out laterally for some distance before passing through the barrier into the n-type slab. 
Think about slowly pouring a small amount of fine powder onto the surface of water. The powder eventually sinks, but first it spreads out in a rough circle. Now we connect the collector. Even though it can’t draw current by itself through the barrier of the p-n junction, its large negative voltage and pointed shape do result in a concentrated electric field that penetrates the germanium. Because the collector is so close to the emitter, and is also negatively charged, it begins sucking up many of the holes that are spreading out from the emitter. This charge flow results in a concentration of holes near the p-n barrier underneath the collector. This concentration effectively lowers the “height” of the barrier that would otherwise prevent current from flowing between the collector and the base. With the barrier lowered, current starts flowing from the base into the collector—much more current than what the emitter is putting into the transistor. The amount of current depends on the height of the barrier. Small decreases or increases in the emitter’s voltage cause the barrier to fluctuate up and down, respectively. Thus very small changes in the emitter current control very large changes at the collector, so voilà! Amplification. (EEs will notice that the functions of base and emitter are reversed compared with those in later transistors, where the base, not the emitter, controls the response of the transistor.) Ungainly and fragile though it was, it was a semiconductor amplifier, and its progeny would change the world. And its inventors knew it. The fateful day was 16 December 1947, when Brattain hit on the idea of using a plastic triangle belted by a strip of gold foil, with that tiny slit separating the emitter and collector contacts. This configuration gave reliable power gain, and the duo knew then that they had succeeded. In his carpool home that night, Brattain told his companions he’d just done “the most important experiment that I’d ever do in my life” and swore them to secrecy. The taciturn Bardeen, too, couldn’t resist sharing the news. As his wife, Jane, prepared dinner that night, he reportedly said, simply, “We discovered something today.” With their children scampering around the kitchen, she responded, “That’s nice, dear.” It was a transistor, at last, but it was pretty rickety. The inventors later hit on the idea of electrically forming the collector by passing large currents through it during the transistor’s manufacturing. This technique enabled them to get somewhat larger current flows that weren’t so tightly confined within the surface layer. The electrical forming was a bit hit-or-miss, though. “They would just throw out the ones that didn’t work,” Misa notes. Nevertheless, point-contact transistors went into production at many companies, under license to AT&T, and, in 1951, at AT&T’s own manufacturing arm, Western Electric. They were used in hearing aids, oscillators, telephone-routing gear, an experimental TV receiver built at RCA, and the Tradic, the first airborne digital computer, among other systems. In fact, point-contact transistors remained in production until 1966, in part due to their superior speed compared with the alternatives. The Bell Labs group wasn’t alone in its successful pursuit of a transistor. 
In Aulnay-sous-Bois, a suburb northeast of Paris, two German physicists, Herbert Mataré and Heinrich Welker, were also trying to build a three-terminal semiconductor amplifier. Working for a French subsidiary of Westinghouse, they were following up on very intriguing observations Mataré had made while developing germanium and silicon rectifiers for the German military in 1944. The two succeeded in creating a reliable point-contact transistor in June 1948. They were astounded, a week or so later, when Bell Labs finally revealed the news of its own transistor, at a press conference on 30 June 1948. Though they were developed completely independently, and in secret, the two devices were more or less identical. Here the story of the transistor takes a weird turn, breathtaking in its brilliance and also disturbing in its details. Bardeen’s and Brattain’s boss, William Shockley, was furious that his name was not included with Bardeen’s and Brattain’s on the original patent application for the transistor. He was convinced that Bardeen and Brattain had merely spun his theories about using fields in semiconductors into their working device, and had failed to give him sufficient credit. Yet in 1945, Shockley had built a transistor based on those very theories, and it hadn’t worked. In 1953, RCA engineer Gerald Herzog led a team that designed and built the first “all-transistor” television (although, yes, it had a cathode-ray tube). The team used point-contact transistors produced by RCA under a license from Bell Labs. TRANSISTOR MUSEUM JERRY HERZOG ORAL HISTORY At the end of December, barely two weeks after the initial success of the point-contact transistor, Shockley traveled to Chicago for the annual meeting of the American Physical Society. On New Year’s Eve, holed up in his hotel room and fueled by a potent mix of jealousy and indignation, he began designing a transistor of his own. In three days he scribbled some 30 pages of notes. By the end of the month, he had the basic design for what would become known as the bipolar junction transistor, or BJT, which would eventually supersede the point-contact transistor and reign as the dominant transistor until the late 1970s. With insights gleaned from the Bell Labs work, RCA began developing its own point-contact transistors in 1948. The group included the seven shown here—four of which were used in RCA’s experimental, 22-transistor television set built in 1953. These four were the TA153 [top row, second from left], the TA165 [top, far right], the TA156 [bottom row, middle] and the TA172 [bottom, right].TRANSISTOR MUSEUM JONATHAN HOPPE COLLECTION The BJT was based on Shockley’s conviction that charges could, and should, flow through the bulk semiconductors rather than through a thin layer on their surface. The device consisted of three semiconductor layers, like a sandwich: an emitter, a base in the middle, and a collector. They were alternately doped, so there were two versions: n-type/p-type/n-type, called “NPN,” and p-type/n-type/p-type, called “PNP.” The BJT relies on essentially the same principles as the point-contact, but it uses two p-n junctions instead of one. When used as an amplifier, a positive voltage applied to the base allows a small current to flow between it and the emitter, which in turn controls a large current between the collector and emitter. Consider an NPN device. The base is p-type, so it has excess holes. But it is very thin and lightly doped, so there are relatively few holes. 
A tiny fraction of the electrons flowing in combines with these holes and are removed from circulation, while the vast majority (more than 97 percent) of electrons keep flowing through the thin base and into the collector, setting up a strong current flow. But those few electrons that do combine with holes must be drained from the base in order to maintain the p-type nature of the base and the strong flow of current through it. That removal of the “trapped” electrons is accomplished by a relatively small flow of current through the base. That trickle of current enables the much stronger flow of current into the collector, and then out of the collector and into the collector circuit. So, in effect, the small base current is controlling the larger collector circuit. Electric fields come into play, but they do not modulate the current flow, which the early theoreticians thought would have to happen for such a device to function. Here’s the gist: Both of the p-n junctions in a BJT are straddled by depletion regions, in which electrons and holes combine and there are relatively few mobile charge carriers. Voltage applied across the junctions sets up electric fields at each, which push charges across those regions. These fields enable electrons to flow all the way from the emitter, across the base, and into the collector. In the BJT, “the applied electric fields affect the carrier density, but because that effect is exponential, it only takes a little bit to create a lot of diffusion current,” explains Ioannis “John” Kymissis, chair of the department of electrical engineering at Columbia University. The very first transistors were a type known as point contact, because they relied on metal contacts touching the surface of a semiconductor. They ramped up output current—labeled “Collector current” in the top diagram—by using an applied voltage to overcome a barrier to charge flow. Small changes to the input, or “emitter,” current modulate this barrier, thus controlling the output current. The bipolar junction transistor accomplishes amplification using much the same principles but with two semiconductor interfaces, or junctions, rather than one. As with the point-contact transistor, an applied voltage overcomes a barrier and enables current flow that is modulated by a smaller input current. In particular, the semiconductor junctions are straddled by depletion regions, across which the charge carriers diffuse under the influence of an electric field.Chris Philpot The BJT was more rugged and reliable than the point-contact transistor, and those features primed it for greatness. But it took a while for that to become obvious. The BJT was the technology used to make integrated circuits, from the first ones in the early 1960s all the way until the late 1970s, when metal-oxide-semiconductor field-effect transistors (MOSFETs) took over. In fact, it was these field-effect transistors, first the junction field-effect transistor and then MOSFETs, that finally realized the decades-old dream of a three-terminal semiconductor device whose operation was based on the field effect—Shockley’s original ambition. Such a glorious future could scarcely be imagined in the early 1950s, when AT&T and others were struggling to come up with practical and efficient ways to manufacture the new BJTs. Shockley himself went on to literally put the silicon into Silicon Valley. He moved to Palo Alto and in 1956 founded a company that led the switch from germanium to silicon as the electronic semiconductor of choice. 
Employees from his company would go on to found Fairchild Semiconductor, and then Intel. Later in his life, after losing his company because of his terrible management, he became a professor at Stanford and began promulgating ungrounded and unhinged theories about race, genetics, and intelligence. In 1951 Bardeen left Bell Labs to become a professor at the University of Illinois at Urbana-Champaign, where he won a second Nobel Prize for physics, for a theory of superconductivity. (He is the only person to have won two Nobel Prizes in physics.) Brattain stayed at Bell Labs until 1967, when he joined the faculty at Whitman College, in Walla Walla, Wash. Shockley died a largely friendless pariah in 1989. But his transistor would change the world, though it was still not clear as late as 1953 that the BJT would be the future. In an interview that year, Donald G. Fink, who would go on to help establish the IEEE a decade later, mused, “Is it a pimpled adolescent, now awkward, but promising future vigor? Or has it arrived at maturity, full of languor, surrounded by disappointments?” It was the former, and all of our lives are so much the better because of it. This article appears in the December 2022 print issue as “The First Transistor and How It Worked.”
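To attach a number to the junction-transistor description above: the article says more than 97 percent of the electrons injected by the emitter of an NPN device reach the collector. The short Python sketch below turns that fraction (the common-base gain, usually written alpha) into the current gain seen from the base (the common-emitter gain, beta) using the standard textbook relation beta = alpha / (1 - alpha). The 97 percent figure comes from the text; the example base current is an arbitrary illustration, not a value from the article.

    # Fraction of emitter electrons that reach the collector ("more than 97 percent").
    alpha = 0.97

    # Standard relation between common-base gain (alpha) and common-emitter gain (beta).
    beta = alpha / (1 - alpha)
    print(round(beta, 1))  # ~32.3: collector current is roughly 32x the base current

    # Illustrative only: a 0.1 mA base current would then support ~3.2 mA of collector current.
    i_base = 0.1e-3  # amperes (assumed example value)
    i_collector = beta * i_base
    print(f"{i_collector * 1e3:.1f} mA")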
  • The EV Transition Explained: Battery Challenges
    Nov 19, 2022 11:30 AM PST
    “Energy and information are two basic currencies of organic and social systems,” the economics Nobelist Herb Simon once observed. “A new technology that alters the terms on which one or the other of these is available to a system can work on it the most profound changes.” Electric vehicles at scale alter the terms of both basic currencies concurrently. Reliable, secure supplies of minerals and software are core elements for EVs, which represent a “shift from a fuel-intensive to a material-intensive energy system,” according to a report by the International Energy Agency (IEA). For example, the mineral requirements for an EV’s batteries and electric motors are six times those of an internal-combustion-engine (ICE) vehicle, which can increase the average weight of an EV by 340 kilograms (750 pounds). For something like the Ford Lightning, the weight increase can be more than twice that amount. The EV Transition Explained This is the second in a series of articles exploring the major technological and social challenges that must be addressed as we move from vehicles with internal-combustion engines to electric vehicles at scale. In reviewing each article, readers should bear in mind Nobel Prize–winning physicist Richard Feynman’s admonition: “For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.” EVs also create a shift from an electromechanical-intensive to an information-intensive vehicle. EVs offer a virtual clean slate from which to accelerate the design of safe, software-defined vehicles, with computing and supporting electronics being the prime enabler of a vehicle’s features, functions, and value. Software also allows for the decoupling of the internal mechanical connections needed in an ICE vehicle, permitting an EV to be controlled remotely or autonomously. An added benefit is that the loss of the ICE power train not only reduces the components a vehicle requires but also frees up space for increased passenger comfort and storage. The effects of Simon’s profound changes are readily apparent, forcing a 120-year-old industry to fundamentally reinvent itself. EVs require automakers to design new manufacturing processes and build plants to make both EVs and their batteries. Ramping up the battery supply chain is the automakers’ current “most challenging topic,” according to VW chief financial officer Arno Antlitz. These plants are also very expensive. Ford and its Korean battery supplier SK Innovation are spending US $5.6 billion to produce F-Series EVs and batteries in Stanton, Tenn., for example, while GM is spending $2 billion to produce its new Cadillac Lyriq EVs in Spring Hill, Tenn. As automakers expand their lines of EVs, tens of billions more will need to be invested in both manufacturing and battery plants. It is little wonder that Tesla CEO Elon Musk calls EV factories “gigantic money furnaces.” Furthermore, adds Kristin Dziczek, a policy analyst with the Federal Reserve Bank of Chicago, there are scores of new global EV competitors actively seeking to replace the legacy automakers. The “simplicity” of EVs in comparison with ICE vehicles allows these disruptors to compete virtually from scratch with legacy automakers, not only in the car market itself but for the material and labor inputs as well. 
Batteries and the supply-chain challenge Another critical question is whether all the planned battery-plant output will support expected EV production demands. For instance, the United States will require 8 million EV batteries annually by 2030 if its target to make EVs half of all new-vehicle sales is met, with that number rising each year after. As IEA executive director Fatih Birol observes, “Today, the data shows a looming mismatch between the world’s strengthened climate ambitions and the availability of critical minerals that are essential to realizing those ambitions.” This mismatch worries automakers. GM, Ford, Tesla, and others have moved to secure batteries through 2025, but it could be tricky after that. Rivian Automotive chief executive RJ Scaringe was recently quoted in the Wall Street Journal as saying that “90 to 95 percent of the (battery) supply chain does not exist,” and that the current semiconductor chip shortage is “a small appetizer to what we are about to feel on battery cells over the next two decades.” The competition for securing raw materials, along with the increased consumer demand, has caused EV prices to spike. Ford has raised the price of the Lightning $6,000 to $8,500, and CEO Jim Farley bluntly states that in regard to material shortages in the foreseeable future, “I don’t think we should be confident in any other outcomes than an increase in prices.” Stiff Competition for Engineering Talent One critical area of resource competition is over the limited supply of software and systems engineers with the mechatronics and robotics expertise needed for EVs. Major automakers have moved aggressively to bring more software and systems-engineering expertise on board, rather than have it reside at their suppliers, as they have traditionally done. Automakers feel that if they’re not in control of the software, they’re not in control of their product. Volvo’s CEO Jim Rowan stated earlier this year that increasing the computing power in EVs will be harder and more altering of the automotive industry than switching from ICE vehicles to EVs. This means that EV winners and losers will in great part be separated by their “relative strength in their cyberphysical systems engineering,” states Clemson’s Paredis. Even for the large auto suppliers, the transition to EVs will not be an easy road. For instance, automakers are demanding these suppliers absorb more cost cuts because automakers are finding EVs so expensive to build. Not only do automakers want to bring cutting-edge software expertise in-house, they want greater inside expertise in critical EV supply-chain components, especially batteries. Automakers, including Tesla, are all scrambling for battery talent, with bidding wars reportedly breaking out to acquire top candidates. With automakers planning to spend more than $13 billion to build at least 13 new EV battery plants in North America within the next five to seven years, experienced management and production-line talent will likely be in extremely short supply. Tesla’s Texas Gigafactory needs some 10,000 workers alone, for example. With at least 60 new battery plants planned to be in operation globally by 2030, and scores needed soon afterward, major battery makers are already highlighting their expected skill shortages. 
The underlying reason for the worry: Supplying sufficient raw materials to existing and planned battery plants as well as to the manufacturers of other renewable energy sources and military systems—who are competing for the same materials—has several complications to overcome. Among them is the need for more mines to provide the metals required, which have spiked in price as demand has increased. For example, while demand for lithium is growing rapidly, investment in mines has significantly lagged the investment that has been aimed toward EVs and battery plants. It can take five or more years to get a lithium mine up and going, but operations can start only after it has secured the required permits, a process that itself can take years. Mining the raw materials, of course, assumes that there is sufficient refining capability to process them, which, outside of China, is limited. This is especially true in the United States, which, according to a Biden Administration special supply-chain investigative report, has “limited raw material production capacity and virtually no processing capacity.” Consequently, the report states, the United States “exports the limited raw materials produced today to foreign markets.” For example, output from the only nickel mine in the United States, the Eagle mine in Michigan, is sent to Canada for smelting. One possible solution is to move away from lithium-ion batteries and nickel metal hydride batteries to other battery chemistries such as lithium-iron phosphate, lithium-sulfur, lithium-metal, and sodium-ion, among many others, not to mention solid-state batteries, as a way to alleviate some of the material supply and cost problems. Tesla is moving toward the use of lithium-iron phosphate batteries, as is Ford for some of its vehicles. These batteries are cobalt free, which alleviates several sourcing issues. Another solution may be recycling both EV batteries as well as the waste and rejects from battery manufacturing, which can run between 5 and 10 percent of production. Effective recycling of EV batteries “has the potential to reduce primary demand compared to total demand in 2040, by approximately 25 percent for lithium, 35 percent for cobalt and nickel, and 55 percent for copper,” according to a report by the University of Sydney’s Institute for Sustainable Futures. While investments into creating EV battery recycling facilities have started, there is a looming question of whether there will be enough battery factory scrap and other lithium-ion battery waste for them to remain operational while they wait for sufficient numbers of spent EV batteries to make them profitable. Lithium-ion battery-pack recycling is very time-consuming and expensive; mining lithium, for example, is often cheaper than recycling it. Low- or no-cobalt lithium batteries, the direction many automakers are taking, may also prove unprofitable to recycle. An additional concern is that EV batteries, once no longer useful for propelling the EV, have years of life left in them. They can be refurbished, rebuilt, and reused in EVs, or repurposed into storage devices for homes, businesses, or the grid. Whether it will make economic sense to do either at scale versus recycling them remains to be seen. 
As Howard Nusbaum, the administrator of the National Salvage Vehicle Reporting Program (NSVRP), succinctly puts it, “There is no recycling, and no EV-recycling industry, if there is no economic basis for one.” In the next article in the series, we will look at whether the grid can handle tens of millions of EVs.
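Two of the numbers above are easy to sanity-check with back-of-the-envelope arithmetic in Python. The 8-million-battery estimate follows from the 50 percent sales target if you assume roughly 16 million new vehicles sold in the United States in a typical year (that sales figure is an assumption for illustration, not a number from this article), and the 340-kilogram weight increase matches the quoted 750 pounds.

    # ASSUMPTION: ~16 million new light vehicles sold in the US in a typical year
    # (illustrative; the article itself does not state this number).
    annual_us_sales = 16_000_000
    ev_share = 0.5                            # EVs as half of all new-vehicle sales
    print(int(annual_us_sales * ev_share))    # 8,000,000 batteries per year, matching the article

    # The quoted EV weight increase: 340 kilograms should be about 750 pounds.
    kg_increase = 340
    print(round(kg_increase * 2.20462))       # ~750 lb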
  • Are You Ready for Workplace Brain Scanning?
    Nov 19, 2022 08:00 AM PST
    Get ready: Neurotechnology is coming to the workplace. Neural sensors are now reliable and affordable enough to support commercial pilot projects that extract productivity-enhancing data from workers’ brains. These projects aren’t confined to specialized workplaces; they’re also happening in offices, factories, farms, and airports. The companies and people behind these neurotech devices are certain that they will improve our lives. But there are serious questions about whether work should be organized around certain functions of the brain, rather than the person as a whole. To be clear, the kind of neurotech that’s currently available is nowhere close to reading minds. Sensors detect electrical activity across different areas of the brain, and the patterns in that activity can be broadly correlated with different feelings or physiological responses, such as stress, focus, or a reaction to external stimuli. These data can be exploited to make workers more efficient—and, proponents of the technology say, to make them happier. Two of the most interesting innovators in this field are the Israel-based startup InnerEye, which aims to give workers superhuman abilities, and Emotiv, a Silicon Valley neurotech company that’s bringing a brain-tracking wearable to office workers, including those working remotely. The fundamental technology that these companies rely on is not new: Electroencephalography (EEG) has been around for about a century, and it’s commonly used today in both medicine and neuroscience research. For those applications, the subject may have up to 256 electrodes attached to their scalp with conductive gel to record electrical signals from neurons in different parts of the brain. More electrodes, or “channels,” mean that doctors and scientists can get better spatial resolution in their readouts—they can better tell which neurons are associated with which electrical signals. What is new is that EEG has recently broken out of clinics and labs and has entered the consumer marketplace. This move has been driven by a new class of “dry” electrodes that can operate without conductive gel, a substantial reduction in the number of electrodes necessary to collect useful data, and advances in artificial intelligence that make it far easier to interpret the data. Some EEG headsets are even available directly to consumers for a few hundred dollars. While the public may not have gotten the memo, experts say the neurotechnology is mature and ready for commercial applications. “This is not sci-fi,” says James Giordano, chief of neuroethics studies at Georgetown University Medical Center. “This is quite real.” How InnerEye’s TSA-boosting technology works InnerEye Security Screening Demo youtu.be In an office in Herzliya, Israel, Sergey Vaisman sits in front of a computer. He’s relaxed but focused, silent and unmoving, and not at all distracted by the seven-channel EEG headset he’s wearing. On the computer screen, images rapidly appear and disappear, one after another. At a rate of three images per second, it’s just possible to tell that they come from an airport X-ray scanner. It’s essentially impossible to see anything beyond fleeting impressions of ghostly bags and their contents. “Our brain is an amazing machine,” Vaisman tells us as the stream of images ends. The screen now shows an album of selected X-ray images that were just flagged by Vaisman’s brain, most of which are now revealed to have hidden firearms. 
No one can knowingly identify and flag firearms among the jumbled contents of bags when three images are flitting by every second, but Vaisman’s brain has no problem doing so behind the scenes, with no action required on his part. The brain processes visual imagery very quickly. According to Vaisman, the decision-making process to determine whether there’s a gun in complex images like these takes just 300 milliseconds. What takes much more time are the cognitive and motor processes that occur after the decision making—planning a response (such as saying something or pushing a button) and then executing that response. If you can skip these planning and execution phases and instead use EEG to directly access the output of the brain’s visual processing and decision-making systems, you can perform image-recognition tasks far faster. The user no longer has to actively think: For an expert, just that fleeting first impression is enough for their brain to make an accurate determination of what’s in the image. InnerEye’s image-classification system operates at high speed by providing a shortcut to the brain of an expert human. As an expert focuses on a continuous stream of images (from three to 10 images per second, depending on complexity), a commercial EEG system combined with InnerEye’s software can distinguish the characteristic response the expert’s brain produces when it recognizes a target. In this example, the target is a weapon in an X-ray image of a suitcase, representing an airport-security application. Vaisman is the vice president of R&D at InnerEye, an Israel-based startup that recently came out of stealth mode. InnerEye uses deep learning to classify EEG signals into responses that indicate “targets” and “nontargets.” Targets can be anything that a trained human brain can recognize. In addition to developing security screening, InnerEye has worked with doctors to detect tumors in medical images, with farmers to identify diseased plants, and with manufacturing experts to spot product defects. For simple cases, InnerEye has found that our brains can handle image recognition at rates of up to 10 images per second. And, Vaisman says, the company’s system produces results just as accurate as those a human would produce when recognizing and tagging images manually—InnerEye is merely using EEG as a shortcut to that person’s brain to drastically speed up the process. While using the InnerEye technology doesn’t require active decision making, it does require training and focus. Users must be experts at the task, well trained in identifying a given type of target, whether that’s firearms or tumors. They must also pay close attention to what they’re seeing—they can’t just zone out and let images flash past. InnerEye’s system measures focus very accurately, and if the user blinks or stops concentrating momentarily, the system detects it and shows the missed images again. Having a human brain in the loop is especially important for classifying data that may be open to interpretation. 
For example, a well-trained image classifier may be able to determine with reasonable accuracy whether an X-ray image of a suitcase shows a gun, but if you want to determine whether that X-ray image shows something else that’s vaguely suspicious, you need human experience. People are capable of detecting something unusual even if they don’t know quite what it is. “We can see that uncertainty in the brain waves,” says InnerEye founder and chief technology officer Amir Geva. “We know when they aren’t sure.” Humans have a unique ability to recognize and contextualize novelty, a substantial advantage that InnerEye’s system has over AI image classifiers. InnerEye then feeds that nuance back into its AI models. “When a human isn’t sure, we can teach AI systems to be not sure, which is better training than teaching the AI system just one or zero,” says Geva. “There is a need to combine human expertise with AI.” InnerEye’s system enables this combination, as every image can be classified by both computer vision and a human brain. Using InnerEye’s system is a positive experience for its users, the company claims. “When we start working with new users, the first experience is a bit overwhelming,” Vaisman says. “But in one or two sessions, people get used to it, and they start to like it.” Geva says some users do find it challenging to maintain constant focus throughout a session, which lasts up to 20 minutes, but once they get used to working at three images per second, even two images per second feels “too slow.” In a security-screening application, three images per second is approximately an order of magnitude faster than an expert can manually achieve. InnerEye says their system allows far fewer humans to handle far more data, with just two human experts redundantly overseeing 15 security scanners at once, supported by an AI image-recognition system that is being trained at the same time, using the output from the humans’ brains. InnerEye is currently partnering with a handful of airports around the world on pilot projects. And it’s not the only company working to bring neurotech into the workplace. How Emotiv’s brain-tracking technology works Emotiv’s MN8 earbuds collect two channels of EEG brain data. The earbuds can also be used for phone calls and music. Emotiv When it comes to neural monitoring for productivity and well-being in the workplace, the San Francisco–based company Emotiv is leading the charge. Since its founding 11 years ago, Emotiv has released three models of lightweight brain-scanning headsets. Until now the company had mainly sold its hardware to neuroscientists, with a sideline business aimed at developers of brain-controlled apps or games. Emotiv started advertising its technology as an enterprise solution only this year, when it released its fourth model, the MN8 system, which tucks brain-scanning sensors into a pair of discreet Bluetooth earbuds. Tan Le, Emotiv’s CEO and cofounder, sees neurotech as the next trend in wearables, a way for people to get objective “brain metrics” of mental states, enabling them to track and understand their cognitive and mental well-being. “I think it’s reasonable to imagine that five years from now this [brain tracking] will be quite ubiquitous,” she says. When a company uses the MN8 system, workers get insight into their individual levels of focus and stress, and managers get aggregated and anonymous data about their teams. 
The Emotiv Experience: Emotiv’s MN8 system uses earbuds to capture two channels of EEG data, from which the company’s proprietary algorithms derive performance metrics for attention and cognitive stress. It’s very difficult to draw conclusions from raw EEG signals [top], especially with only two channels of data. The MN8 system relies on machine-learning models that Emotiv developed using a decade’s worth of data from its earlier headsets, which have more electrodes. To determine a worker’s level of attention and cognitive stress, the MN8 system uses a variety of analyses. One shown here [middle, bar graphs] reveals increased activity in the low-frequency ranges (theta and alpha) when a worker’s attention is high and cognitive stress is low; when the worker has low attention and high stress, there’s more activity in the higher-frequency ranges (beta and gamma). This analysis and many others feed into the models that present simplified metrics of attention and cognitive stress [bottom] to the worker. Emotiv launched its enterprise technology into a world that is fiercely debating the future of the workplace. Workers are feuding with their employers about return-to-office plans following the pandemic, and companies are increasingly using “bossware” to keep tabs on employees—whether staffers or gig workers, working in the office or remotely. Le says Emotiv is aware of these trends and is carefully considering which companies to work with as it debuts its new gear. “The dystopian potential of this technology is not lost on us,” she says. “So we are very cognizant of choosing partners that want to introduce this technology in a responsible way—they have to have a genuine desire to help and empower employees.” Lee Daniels, a consultant who works for the global real estate services company JLL, has spoken with a lot of C-suite executives lately. “They’re worried,” says Daniels. “There aren’t as many people coming back to the office as originally anticipated—the hybrid model is here to stay, and it’s highly complex.” Executives come to Daniels asking how to manage a hybrid workforce. “This is where the neuroscience comes in,” he says. Emotiv has partnered with JLL, which has begun to use the MN8 earbuds to help its clients collect “true scientific data,” Daniels says, about workers’ attention, distraction, and stress, and how those factors influence both productivity and well-being. Daniels says JLL is currently helping its clients run short-term experiments using the MN8 system to track workers’ responses to new collaboration tools and various work settings; for example, employers could compare the productivity of in-office and remote workers. Emotiv CTO Geoff Mackellar believes the new MN8 system will succeed because of its convenient and comfortable form factor: The multipurpose earbuds also let the user listen to music and answer phone calls. The downside of earbuds is that they provide only two channels of brain data. When the company first considered this project, Mackellar says, his engineering team looked at the rich data set they’d collected from Emotiv’s other headsets over the past decade. The company boasts that academics have conducted more than 4,000 studies using Emotiv tech. From that trove of data—from headsets with 5, 14, or 32 channels—Emotiv isolated the data from the two channels the earbuds could pick up. 
“Obviously, there’s less information in the two sensors, but we were able to extract quite a lot of things that were very relevant,” Mackellar says. Once the Emotiv engineers had a hardware prototype, they had volunteers wear the earbuds and a 14-channel headset at the same time. By recording data from the two systems in unison, the engineers trained a machine-learning algorithm to identify the signatures of attention and cognitive stress from the relatively sparse MN8 data. The brain signals associated with attention and stress have been well studied, Mackellar says, and are relatively easy to track. Although everyday activities such as talking and moving around also register on EEG, the Emotiv software filters out those artifacts. The app that’s paired with the MN8 earbuds doesn’t display raw EEG data. Instead, it processes that data and shows workers two simple metrics relating to their individual performance. One squiggly line shows the rise and fall of workers’ attention to their tasks—the degree of focus and the dips that come when they switch tasks or get distracted—while another line represents their cognitive stress. Although short periods of stress can be motivating, too much for too long can erode productivity and well-being. The MN8 system will therefore sometimes suggest that the worker take a break. Workers can run their own experiments to see what kind of break activity best restores their mood and focus—maybe taking a walk, or getting a cup of coffee, or chatting with a colleague. What neuroethicists think about neurotech in the workplace While MN8 users can easily access data from their own brains, employers don’t see individual workers’ brain data. Instead, they receive aggregated data to get a sense of a team or department’s attention and stress levels. With that data, companies can see, for example, on which days and at which times of day their workers are most productive, or how a big announcement affects the overall level of worker stress. Emotiv emphasizes the importance of anonymizing the data to protect individual privacy and prevent people from being promoted or fired based on their brain metrics. “The data belongs to you,” says Emotiv’s Le. “You have to explicitly allow a copy of it to be shared anonymously with your employer.” If a group is too small for real anonymity, Le says, the system will not share that data with employers. She also predicts that the device will be used only if workers opt in, perhaps as part of an employee wellness program that offers discounts on medical insurance in return for using the MN8 system regularly. However, workers may still be worried that employers will somehow use the data against them. Karen Rommelfanger, founder of the Institute of Neuroethics, shares that concern. “I think there is significant interest from employers” in using such technologies, she says. “I don’t know if there’s significant interest from employees.” Both she and Georgetown’s Giordano doubt that such tools will become commonplace anytime soon. “I think there will be pushback” from employees on issues such as privacy and worker rights, says Giordano. Even if the technology providers and the companies that deploy the technology take a responsible approach, he expects questions to be raised about who owns the brain data and how it’s used. “Perceived threats must be addressed early and explicitly,” he says. Giordano says he expects workers in the United States and other western countries to object to routine brain scanning. 
In China, he says, workers have reportedly been more receptive to experiments with such technologies. He also believes that brain-monitoring devices will really take off first in industrial settings, where a momentary lack of attention can lead to accidents that injure workers and hurt a company’s bottom line. “It will probably work very well under some rubric of occupational safety,” Giordano says. It’s easy to imagine such devices being used by companies involved in trucking, construction, warehouse operations, and the like. Indeed, at least one such product, an EEG headband that measures fatigue, is already on the market for truck drivers and miners. Giordano says that using brain-tracking devices for safety and wellness programs could be a slippery slope in any workplace setting. Even if a company focuses initially on workers’ well-being, it may soon find other uses for the metrics of productivity and performance that devices like the MN8 provide. “Metrics are meaningless unless those metrics are standardized, and then they very quickly become comparative,” he says. Rommelfanger adds that no one can foresee how workplace neurotech will play out. “I think most companies creating neurotechnology aren’t prepared for the society that they’re creating,” she says. “They don’t know the possibilities yet.” This article appears in the December 2022 print issue.
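As a rough illustration of the kind of spectral analysis described above for the MN8 earbuds, the sketch below computes band power in the theta, alpha, beta, and gamma ranges from a two-channel EEG segment and forms a simple low-band/high-band ratio as a stand-in "attention" score. This is a generic EEG band-power calculation in Python using SciPy, not Emotiv's proprietary model; the sampling rate, band edges, and the ratio itself are illustrative assumptions loosely modeled on the qualitative description in the article.

    import numpy as np
    from scipy.signal import welch

    FS = 256  # Hz, assumed sampling rate
    BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 30), "gamma": (30, 45)}

    def band_powers(eeg, fs=FS):
        """eeg has shape (channels, samples); returns average power per band."""
        freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)
        return {
            name: psd[:, (freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()
        }

    def attention_score(eeg, fs=FS):
        """Toy metric: relatively more theta+alpha power than beta+gamma power
        is read as higher attention / lower stress, mirroring the article."""
        p = band_powers(eeg, fs)
        return (p["theta"] + p["alpha"]) / (p["beta"] + p["gamma"] + 1e-12)

    # Demo on 10 seconds of synthetic two-channel data.
    rng = np.random.default_rng(0)
    fake_eeg = rng.standard_normal((2, FS * 10))
    print(round(attention_score(fake_eeg), 3))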
  • Why Your Organization Should Join the IEEE Standards Association
    Nov 18, 2022 11:00 AM PST
    The global business landscape is constantly evolving. Digital transformation— compounded by the challenges of globalization, supply-chain stability, demographic shifts, and climate change—is pressuring companies and government agencies to innovate and safely deploy sustainable technologies. As digital transformation continues, the pervasive growth of technology increasingly intersects with industry, government, and societal interests. Companies and organizations need access to technologies that can enhance efficiencies, productivity, and competitive advantage. Governments seek influence over emerging technologies to preserve economic interests, advance global trade, and protect their citizens. Consumers are demanding more transparency regarding organizational motives, practices, and processes. For those and other reasons, new types of stakeholders are seeking a voice in the technology standardization process. How organizations benefit from developing standards The need is evidenced in the membership gains at the IEEE Standards Association. IEEE SA membership for organizations, also known as entity membership, has increased by more than 150 percent in the past six years. Academic institutions, government agencies, and other types of organizations now account for more than 30 percent of the member base. Entity membership offers the ability to help shape technology development and ensure your organization’s interests are represented in the standards development process. Other benefits include balloting privileges, leadership eligibility, and networking opportunities. IEEE SA welcomes different types of organizations because they bring varied perspectives and they voice concerns that need to be addressed during the standards development process. Engaging diverse viewpoints from companies of all sizes and types also helps to identify and address changing market needs. From a geographic standpoint, IEEE SA welcomes participation from all regions of the world. Diverse perspectives and contributions to the development cycle enable innovation to be shared and realized by all stakeholders. Programs on blockchain, IoT, and other emerging technology IEEE SA has introduced new industry-engagement programs such as open-source and industry-alliance offerings designed to speed innovation and adoption. In addition, industry participants have access to the full IEEE SA ecosystem of programs and services including technology incubation, pre-standardization work, standards development, and conformity assessment activities. Training and marketing tools support working groups at every stage of the process. An increasing number of new standards projects from emerging technology areas have created a more robust and diversified portfolio of work. The technologies include artificial intelligence and machine learning, blockchain and distributed ledger technologies, quantum computing, cloud computing, the Internet of Things, smart cities, smart factories and online gaming. There is also more participation from the health care, automotive, and financial services sectors. IEEE SA has grown and evolved its programs to address market needs, but its purpose has not changed. The organization is focused on empowering innovators to raise the world’s standards for the benefit of humanity. Those innovators might be individuals or organizations looking to make a difference in the world, but it can be accomplished only when we all work together. 
Learn more about IEEE SA membership for organizations and how your organization can play a key role in advancing future technologies.
  • Andrew Ng: Unbiggen AI
    Feb 09, 2022 07:31 AM PST
    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A. Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias. The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way? Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions. When you say you want a foundation model for computer vision, what do you mean by that? Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them. What needs to happen for someone to build a foundation model for video? Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision. Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. 
While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries. It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users. Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation. I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince. I expect they’re both convinced now. Ng: I think so, yes. Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.” How do you define data-centric AI, and why do you consider it a movement? Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data. When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline. The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up. You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them? Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. 
But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn. When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set? Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system. For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance. Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training? Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle. One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way. When you talk about engineering the data, what do you mean exactly? Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. 
But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity. For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow. What about using synthetic data? Is that often a good solution? Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development. Do you mean that synthetic data would allow you to try the model on more data sets? Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category. Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data. To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment? Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data. One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory. How do you deal with changing needs? 
If products change or lighting conditions change in the factory, can the model keep up? Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations. In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists? So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work. Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains. Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement? Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it. This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
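Ng’s description of tooling that flags inconsistently labeled data can be made concrete with a short script. The sketch below is a hypothetical illustration, not Landing AI’s actual tooling: given labels from several annotators for each image, it computes an agreement score and surfaces the images that should be reviewed and relabeled first.

```python
# Minimal sketch of label-consistency flagging, in the spirit of the tooling
# Ng describes. The annotation data below is hypothetical example data; a real
# pipeline would read it from annotation files exported by a labeling tool.
from collections import Counter

# image_id -> labels assigned by different annotators (hypothetical)
annotations = {
    "img_001": ["scratch", "scratch", "scratch"],
    "img_002": ["dent", "pit_mark", "dent"],
    "img_003": ["pit_mark", "pit_mark", "discoloration"],
    "img_004": ["scratch", "scratch"],
}

def agreement(labels):
    """Fraction of annotators who chose the most common label for an image."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

# Flag images whose annotator agreement falls below a chosen threshold.
THRESHOLD = 1.0  # require unanimity; relax (e.g. 0.7) for larger annotator pools
flagged = {
    img: (labels, agreement(labels))
    for img, labels in annotations.items()
    if agreement(labels) < THRESHOLD
}

# Review the most disputed images first.
for img, (labels, score) in sorted(flagged.items(), key=lambda kv: kv[1][1]):
    print(f"{img}: agreement={score:.2f}, labels={labels}")
```

The same idea extends naturally to per-class agreement, which is how a small, inconsistently labeled class, like the 30 images out of 10,000 in the example above, would get surfaced for targeted relabeling.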
  • How AI Will Change Chip Design
    Feb 08, 2022 06:00 AM PST
    The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process. Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version. But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform. How is AI currently being used to design the next generation of chips? Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider. Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI. What are the benefits of using AI for chip design? Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design. So it’s like having a digital twin in a sense? Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end. So, it’s going to be more efficient and, as you said, cheaper? Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. 
That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering. We’ve talked about the benefits. How about the drawbacks? Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it's not going to be as accurate as that precise model that we’ve developed over the years. Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It's a case where you might have models to predict something and different parts of it, but you still need to bring it all together. One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge. How can engineers use AI to better prepare and extract insights from hardware or sensor data? Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start. One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI. What should engineers and designers consider when using AI for chip design? Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team. How do you think AI will affect chip designers’ jobs? Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip. How do you envision the future of AI and chip design? Gorr: It's very much dependent on that human element—involving people in the process and having that interpretable model. 
We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
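The reduced-order, surrogate-model workflow Gorr describes can also be sketched briefly. This is a minimal illustration with toy physics and assumed names, not a MathWorks workflow: a stand-in "expensive" simulation is sampled at a handful of design points, a Gaussian-process regressor is fitted as a cheap surrogate, and a Monte Carlo parameter sweep then runs on the surrogate instead of the simulator.

```python
# Minimal sketch of surrogate modeling for parameter sweeps.
# The "expensive_simulation" function is a hypothetical placeholder for a slow
# physics-based model; everything else shows the generic workflow.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def expensive_simulation(x):
    """Placeholder for a computationally expensive physics-based model."""
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# 1. Run the real simulator at a small number of design points.
X_train = rng.uniform(-1, 1, size=(25, 2))
y_train = expensive_simulation(X_train)

# 2. Fit a cheap surrogate (Gaussian-process regression) to those runs.
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
surrogate.fit(X_train, y_train)

# 3. Monte Carlo sweep: evaluate thousands of candidate designs on the surrogate.
X_sweep = rng.uniform(-1, 1, size=(10_000, 2))
y_pred, y_std = surrogate.predict(X_sweep, return_std=True)

best = np.argmin(y_pred)
print(f"best candidate {X_sweep[best]}, "
      f"predicted response {y_pred[best]:.3f} +/- {y_std[best]:.3f}")
```

As Gorr notes, the surrogate is not as accurate as the physics-based model; in practice the predicted uncertainty (the returned standard deviation) indicates where the sweep can be trusted and where the real simulator still needs to be run.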
  • Atomically Thin Materials Significantly Shrink Qubits
    Feb 07, 2022 08:12 AM PST
    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality. IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability. Now researchers at MIT have been able both to reduce the size of the qubits and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100. “We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.” The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit. Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C). Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that make them too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another. As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance. In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates. 
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory of Electronics. On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas. While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor. “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.” This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits. “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang. Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
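A rough parallel-plate estimate suggests why a thin, low-loss dielectric such as hBN shrinks the capacitor footprint so dramatically. The numbers below are illustrative assumptions (target capacitance, relative permittivity, and stack thickness), not values reported by the MIT team; they simply show how the required plate area scales with dielectric thickness via C = eps0 * eps_r * A / d.

```python
# Back-of-the-envelope sketch of capacitor footprint vs. dielectric thickness.
# All numbers are illustrative assumptions, not figures from the MIT work.
EPS0 = 8.854e-12     # vacuum permittivity, F/m
EPS_R = 3.0          # assumed relative permittivity of the hBN stack
C_TARGET = 100e-15   # assumed target shunt capacitance, F (order of 100 fF)
THICKNESS = 30e-9    # assumed hBN stack thickness, m

# Parallel-plate model: C = eps0 * eps_r * A / d  =>  A = C * d / (eps0 * eps_r)
area = C_TARGET * THICKNESS / (EPS0 * EPS_R)
side = area ** 0.5

print(f"required plate area: {area * 1e12:.0f} um^2")
print(f"square plate side:   {side * 1e6:.1f} um")
# For comparison, the coplanar design described above uses plates on the order
# of 100 um x 100 um, i.e. roughly 10,000 um^2 of footprint per plate, so a
# thin stacked dielectric gives roughly a hundredfold reduction in area.
```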

Engineering on Twitter