BLISS Lab

Personal History

A Personal Technical Narrative

Updated March 2024

Overview – I am wonderfully lucky.  I have been able to work with amazing, brilliant people and develop the required funding to pursue a wide range of truly fascinating research and development topics.  While I know that I will run out of time before I get to try all the ideas that I want to explore, I have been able to pursue more than most get a chance to try.  A partial list of areas to which I have been able to contribute: superconducting magnet design for particle accelerators; thrust vector control and power control for rockets; particle physics, experimental and lattice gauge theory; novel, robust antenna array processing for communications and radar; novel multiple-input multiple-output (MIMO) communications approaches; foundational work in in-band full-duplex signaling; foundational work in MIMO radar approaches; foundational work in advanced electronic protection (EP) for radar systems; foundational work in RF convergence (also known as integrated sensing and communications or spectrum sharing); foundational work in advanced phase-accurate MIMO positioning, navigation, and timing (PNT) techniques; foundational work in novel physiological radar systems; and revolutionary domain-specific systems-on-chip (SoCs).  In all these areas, I have been able to go from basic information and estimation theory, to algorithms, to implementation approaches, to phenomenology, to system demonstrations.  I have had the opportunity to build wonderful teams, which is crucial in pursuing these topics, or frankly any serious research.

Some Details – I will avoid false modesty; things have gone well.  At the end of calendar year 2012, when I started as faculty at Arizona State University (ASU), it was completely unclear if this personal experiment in academia was going to succeed.  It was, frankly, very stressful because I had upended my life and my family’s life to do it.  I am particularly thankful to my family for letting me introduce so much chaos into their lives.  As of 2023, I have graduated 14 ASU PhD students [bliss.asu.edu/people/], and have well over 13,000 citations [scholar.google.com/citations?user=scNTosQAAAAJ].  I have built a wonderful team of students, staff, researchers, and collaborators.  In 2023 alone, I received 5 U.S. patents.  For the last several years, as principal investigator (PI), I have credit for several million dollars per year in research expenditures (over $7 M per year in 2023).  At the moment (2023), I have, as PI, over $40 million in research programs.  I have started two companies as spinouts of our academic efforts: DASH Tech Integrated Circuits and Big Little Sensor Company.  Overall, I would say things have worked out ok.

Given that I have been PI on dozens of projects and technical lead on many more, I will not attempt to discuss everything.  Furthermore, some of the projects were sensitive from either national security or commercial perspectives, so I have intentionally left them out.  For your amusement, what follows is a nearly chronological discussion of some of my personal history and projects.

Undergraduate Degree – I graduated from Arizona State University in 1989 with a Bachelor of Science in electrical engineering.  My class work was relatively general, but I assumed that I would work on analog design when I graduated.  During my time at ASU, I also worked about half-time at Motorola Government Electronics Group (GEG).  I worked on a range of software and hardware projects that were related to advanced communications systems.  Given my recent research efforts, it is somewhat amusing that my first serious project at Motorola GEG was writing a software simulation program for systolic array processors.  In Turbo Pascal, I built a program that modeled the computations with an interface that looked like a spreadsheet but was customized to perform the required functionality.  While now there are many resources to aid in producing user interfaces, I had to basically do everything – including all aspects of the user interface – from scratch.  I still have fond memories of Turbo Pascal.

Rockets – After graduating, I moved to San Diego to work for General Dynamics Space Systems.  I worked on the avionics design of the thrust vector control and the power control for the upper stage of the Atlas-Centaur rocket.  The thrust vector control board produced the signals to control the actuators that steered the engines.  The power control unit was basically a complicated DC-to-DC converter box.  My job was to update prior designs to address new requirements and changes in available parts.  Because all parts were required to be rated for space (s-level), part selection was limited.  On the thrust vector control board, I had to work with a prior design of an operational amplifier built out of discrete parts rather than an integrated circuit.  It was an interesting experience.  Every engineer should have the opportunity to see their designs vacuum, thermal, and vibration tested to destruction.  It changes your view of the world.

For the launches, some of the design engineers would go see them in person, but most of us would be working on the next design.  The rest of us would go to a large room with a huge screen, so that we could watch the launch without having to travel.  Launches are often scrubbed for any one of a huge number of reasons.  However, when they actually launch, and you know that your designs are controlling that monster, it is quite an emotional ride.  Because humans are humans, people usually sat with people from their groups.  As the launch countdown got close, you could feel the tension rise.  When the engines were lit, you could hear a murmur around the room.  Modern rockets are held to the launch pad until it is clear that all the engines are operational.  Then, lift off, followed by cautious cheers.  A moment later, the rocket clears the tower, and the cautious cheers turn into loud claps and exclamations.  What follows is an amusing localized set of cheers as the rocket passes through various critical stages.  The dynamicists cheer as the rocket passes through the point of maximum mechanical stress.  The engineers associated with the first stage cheer when the main engine shuts off (because it is no longer their problem).  Upper stage separation is next.  The start of the upper stage engines was a scary point.  We had a couple of failures caused by faulty valves letting humid Florida air into the system at launch.  Anyway, everyone got a chance to cheer about their contributions.  Well, everyone except range safety.  They were responsible for blowing the rocket up if something went wrong.

At GD, as an extension to my thrust vector control design responsibilities, I also had a chance to work on various fault-tolerant designs.  We developed a triple-modular analog control circuit that would naturally ignore part failures.  I also invented an extension that allowed for dual sequential faults.  Because we thought these inventions were valuable, we worked with the GD lawyers to file a patent.  The process does not happen quickly, so when you do not hear anything for 6 months, you are not surprised.  I did follow up with a couple of phone calls, but no one picked up.  Well, the lawyers sat in the building next to mine, so I thought I would just stop by and see if I could find someone.  When I walked onto the floor where the lawyers sat, it looked like a movie set after a startup had failed.  Nearly all the furniture had been removed, with lots of wires hanging from the walls.  So, clearly, something was up.  I did not know at the time, but GD had started selling off pieces even though the CEO said that he was not going to do that.  When I started at GD, it had about 100,000 employees.  I was told that soon after I left, it was down to 20,000.  Anyway, we never finished filing the patents.
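For readers unfamiliar with the idea, the mid-value-select behavior at the heart of a triple-modular redundant control circuit can be sketched digitally.  The original circuit was analog; the channel values and the fault shown below are made up purely for illustration.

```python
# A minimal digital sketch of triple-modular redundancy: three
# redundant channels drive a mid-value (median) select, so a single
# failed channel is automatically outvoted.  The channel values and
# the hard fault below are hypothetical illustrations.

def mid_value_select(a: float, b: float, c: float) -> float:
    """Return the median of three redundant channel outputs."""
    return sorted((a, b, c))[1]

# Nominal operation: all three channels agree closely, and the
# median tracks a good value.
nominal = mid_value_select(1.00, 1.01, 0.99)

# Single fault: one channel fails hard to the rail (15.0); the
# median simply ignores it.
faulted = mid_value_select(1.00, 1.01, 15.0)

print(nominal, faulted)
```

The same selection logic is why a single part failure cannot drive the actuator command: any one bad channel is never the middle value.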

Superconducting Magnet Design – During my time at GD, I had assumed that I would go back to school for a graduate degree at some point.  I took an advanced engineering mathematics class at the University of California, San Diego Extension at night, just to keep sharp.  I also was re-reading my electromagnetics texts.  I had Maxwell’s Equations posted over my desk.  I thought at one point, if I ever got a tattoo, it would be of Maxwell’s Equations.  My team lead asked about the equations, and I expressed my fondness for electromagnetics.  He said that GD had a group working on superconducting magnets, and they were looking for people.  I soon started working on cross-section design optimization of the superconducting dipole magnet for the Superconducting Super Collider (SSC), which was a huge particle physics experiment that GD was helping to build.  The design of large-scale superconducting magnets is remarkably rich.  One reason it made sense for GD Space Systems to be involved was that the Centaur (upper-stage rocket) used liquid hydrogen and oxygen, so GD was experienced in working with large-scale cryogenic systems.  The magnet’s job was to produce a magnetic field within the beam region that was as perfectly vertical as possible [H. Gurol, et al., “Magnetic performance of the SSC dipole magnets,” IEEE Transactions on Applied Superconductivity, 1993].  As the beam energy of the particle accelerator ramped up, the field would vary from a fraction of a tesla to about 6 tesla.  There were a huge number of technical hurdles to overcome.  The magnetic field outside the main magnet cross section had to be carefully controlled.  To provide this control, the outside of the magnet had a large iron collar.  Furthermore, the forces on the windings were trying to make the magnet explode, so it would try to deform.  To keep the superconducting windings in their superconducting state, they were immersed in liquid helium at a temperature of about 4 kelvin.
At peak current, about 6,000 amps ran through a ribbon that was a bit more than a millimeter by a centimeter in cross section.  It was a messy nonlinear magnetic field calculation problem that changed as the current increased.  Because of my work on the magnetic optimization, I naturally became interested in the source of the requirements, so I started learning about particle accelerator physics (which should really be called particle accelerator engineering).  I attended multiple U.S. Particle Accelerator School programs: one at Stanford and one at Harvard.  I learned a lot and met some wonderful people.
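The flavor of the field calculation can be conveyed with a toy example.  A classic accelerator-dipole result is that a “cos-theta” current distribution on a cylinder produces a nearly uniform transverse field inside.  The sketch below is my own illustration, not the SSC design code: the geometry and currents are invented, and it ignores the iron collar and superconductor effects that made the real problem messy and nonlinear.

```python
# Toy check that a cos-theta winding produces a nearly uniform
# vertical dipole field.  Each cable is approximated as an infinite
# straight wire along the beam axis (z); the 2-D Biot-Savart field of
# wire k at observation point r is B = mu0*I_k/(2*pi*|d|^2) * (z_hat x d),
# with d = r - r_k.  All geometry and currents are illustrative.
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def dipole_field(x, y, radius=0.05, n_wires=60, i0=6000.0):
    """Field (Bx, By) at (x, y) from n_wires line currents on a circle,
    each carrying I = i0*cos(theta) along +z (cos-theta distribution)."""
    bx = by = 0.0
    for k in range(n_wires):
        theta = 2.0 * math.pi * k / n_wires
        ik = i0 * math.cos(theta)          # signed cable current
        dx = x - radius * math.cos(theta)  # vector from wire to point
        dy = y - radius * math.sin(theta)
        r2 = dx * dx + dy * dy
        coef = MU0 * ik / (2.0 * math.pi * r2)
        bx += coef * (-dy)                 # z_hat x d = (-dy, dx)
        by += coef * dx
    return bx, by

b_center = dipole_field(0.0, 0.0)
b_offset = dipole_field(0.01, 0.01)
print(b_center, b_offset)  # nearly identical, and almost purely vertical
```

Running this shows a field of order a tesla that is vertical at the center and essentially unchanged a centimeter off-axis, which is exactly the uniformity an accelerator dipole is optimized for.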

Getting into Grad School – I decided to go back to school in physics rather than electrical engineering.  I applied to a few schools, and I was rejected.  I was missing a number of classes, so to improve my chances of being admitted, I took some undergraduate classes at the University of California, San Diego.  I took a class in quantum mechanics and one in Lagrangian dynamics.  I very much enjoyed them.  As an aside, while I was taking these makeup classes, we had a big review of our superconducting magnet design.  My wonderful friend and colleague David Madura and I worked through the night trying to get all the analysis done.  Dave was also taking quantum with me.  At some point during the review, I could see that he was starting to drift into a half-awake state.  He was taking notes on comments from the review, and I noticed that the notes had transitioned into an incoherent rambling about quantum mechanics.  It was hard not to laugh out loud.  Anyway, when trying to decide where to apply, the joke was that I only applied to schools near the Pacific coast.  Both the University of California, Santa Barbara and the University of California, San Diego were options.  I was leaning toward UCSB because it actually stuck out into the Pacific, but my Lagrangian dynamics professor (Prof. Jim Branson) put in for a fellowship for me, so that tipped the scale.  I entered a direct-to-PhD program at UCSD (1993-1997), and I started working in Jim’s lab.

Grad School – Throughout the first part of my graduate career, I always felt like I was playing catchup.  I was missing many physics major classes, like modern physics, so I had to pick up stuff as I went along.  Jokingly, I say that I learned a lot of my physics on the mean streets of La Jolla.  However, things on the research side came relatively easily.  Through high school, working part time during undergrad, and to some extent during my work at GD, I had done a lot of programming in multiple languages, so writing analysis code was relatively easy.  Also, because of my time at GD as a professional engineer, the concept of making a plan and executing it to solve problems was natural.  Now, as a professor, students often ask me if they should go straight through or take breaks between undergrad and grad school.  For many reasons, most people who leave do not make it back to graduate school.  If you know that you want to get a graduate degree, I recommend going straight through.  However, there are some real benefits in gaining professional experience before grad school.  There is no one right answer.

Graduate school was largely broken into two phases.  The first year of graduate school was heavy on classes followed by a qualifier.  The qual exam was a two-day written exam.  One day was focused on what we should have known from our undergraduate classes (many of which I never had), and the second day was focused on our graduate efforts.  It was very stressful for everyone, and there was a lot of dark humor.  It was particularly stressful for me because I had upended my life, taking a factor of four cut in pay, to get there.  I am thankful for my many friends from grad school, including Ed, Wayne, and Ian, who helped me stay sane.  The expectation was that about two-thirds would pass, which means one-third would not.  While I was generally confident in my abilities, I admit that I was scared.  Maybe unsurprisingly, but still unusually, I did better on the graduate part of the exam than the undergraduate part, on which I frankly struggled a bit.  My advisor teased me about this.  Anyway, I passed, thankfully.

In my nascent research, I was working on particle physics.  It is really a wonderful aspect of physics.  You are asking the most fundamental questions.  What are the basic rules of the universe?  Jim was working with CERN, which operates a large particle accelerator and a number of detectors on the border of France and Switzerland.  For reasons that were more personal than professional (and yes, it was because of a very sweet and intelligent grad student at Cornell), I decided to switch to the CLEO collaboration that used the CESR accelerator at Cornell University.  I started working with Prof. Hans Paar.  While still a UCSD student, I had my desk down in the guts of Wilson Synchrotron Laboratory at Cornell in Ithaca, New York.  When I moved there, I assumed that the papers would soon lede with “Stupid Californian Dies in Snow,” because I had never lived where it snowed.  Somehow, I survived, and it was not that bad.  I took a few classes at Cornell and did a lot of research.

Wilson Lab was full of researchers from all over.  There were, of course, many Cornell graduate students and researchers, but many universities had people stationed there: UCSB, Caltech, Syracuse University, McGill, and many more.  UCSD had a large office that was not far from the CLEO detector.  While I was there, I worked with a number of wonderful students and researchers.  In particular, UCSD postdocs Dr. Michael Sivertz and Dr. Soeren Prell were always there to help me.  Because my advisor, Hans, was still in San Diego, and we did not have good teleconferencing technology, we would have periodic phone calls, but my main interactions were when he visited about once a quarter.  Each visit was followed by marathon meetings going late into the night, during which I would present a two-inch-thick stack of plots (with 4 plots per page) and listen to him ask, over and over and over again, why does it do that?  Everything was triple-checked four different ways.  It was a brilliant time, and Hans was a wonderful advisor.  He pushed me to think clearly and be suspicious of every result.  Sadly, his wife contracted terminal cancer.  Consequently, for about a year, he was not able to visit Cornell.  I was largely left on my own to do my research.

Lattice Gauge Theory – My research at Cornell was broken into two pieces.  My main thrust was experimental particle physics, but I also worked on theoretical lattice gauge theory.  This was not typically done.  What seemed to me like the most natural thing in the world, to be vertically integrated in your research, really rubbed a lot of people the wrong way.  I must thank Cornell Professor Peter Lepage, who let me work with him.  He is a brilliant, creative physicist, who, for reasons that only he could understand, let some crazy UCSD experimentalist work with him.  It was great fun, but admittedly it was right at the edge of my abilities.  I have to thank Hans, too, because he caught some grief for letting me do it.  Anyway, one of the technical issues of quantum chromodynamics (how the strong force works) is that calculations are nonperturbative, so you must be creative.  Lattice gauge theory tries to do calculations on a space-time lattice, but the problem was that your answer would change as you changed the lattice density, so you had to figure out how to normalize your results.  Anyway, Peter was exploring ways to get there faster with lower-density lattices.  I was lucky to work with him [D.W. Bliss, K. Hornbostel, G.P. Lepage, “The Deconfinement Transition on Coarse Lattices,” 1996].  He also told me one of the more useful theories on programming.  He said that when you are writing code, you should complete it, get it working, and then delete it.  Because the second time you write it, you will build a clean version that avoids the mistakes of the first version.  Otherwise, you will just keep patching the errors.

Two-Photon Physics – On the experimental side, I worked a lot with Mike and Soeren, but I also worked a lot with Cornell graduate student Andy Foland.  It is worth noting that Andy was the best natural particle physicist I had ever met.  Because particle physics is about the physics, the experiments, the laboratory, the collaboration politics, and creativity, you wanted to be good at all of these facets.  After graduating, Andy became a professor at Harvard, but later left to work in industry.  In my work, I was extending some of the research in the UCSD laboratory.  We had a history of working on what is called two-photon physics.  As you know, if you cross the beams of two flashlights, the light beams do not bounce off each other.  Light does not seem to directly interact.  Except that is wrong, but only barely.  If you increase the energy of the photons by about a billion times, then there is some reasonable probability of the photons, for an instant, changing into a particle anti-particle pair, which does let another photon bounce off them.  Because the “bouncing” of the light only works for charged particles, it provides an interesting tool for exploring particles.  My first project was on the production of Lambda anti-Lambda pairs.  This was an attempt to help understand some QCD interactions.  The second big topic came out of a conversation with Andy.  Unlike photons, which do not directly couple, gluons (which mediate the strong force) do directly couple.  Consequently, in theory, there is the possibility of a bound state of just gluons.  It would be as if an atom, which is bound together by photons, were made of only photons.  This theoretical object was, amusingly, called a glueball.  Because QCD was so difficult to calculate, we did not know if glueballs existed.  It was strongly expected that at lower energies they existed, but in a weird superposition state (mixing) in which they were, with some probability, a bit glueball and a bit meson (quark anti-quark pairs).
However, there was an extremely short-lived “particle” identified by the BES experiment, designated the f_J(2230).  Glueballs have no electric charge, so there should be an extremely small production rate for the photon-photon interaction around the energy of the f_J(2230).  We could compare the direct production rate to the photon-photon production rate in a parameter sometimes called the stickiness.  Yeah, particle physicists are overly fond of silly word play and games in their papers (look up penguin decay, for example).  Anyway, Andy and I launched into an analysis.  This was an anti-search of sorts.

Andy and I developed an analysis plan.  We would identify two-photon interactions and look for the decay paths into which the f_J(2230) particle (resonance, if you like) should decay if it were made from charged particles.  We studied our expected efficiency and background processes.  We simulated our expected performance and looked at off-resonance data.  It took a few months to set everything up.  Once we began moving along on our plan and looking at the real data, to try to minimize bias in the development of the analysis, we split our team into two.  It would take weeks to run through all the tapes that stored years of data.  Andy would look at some of our intermediate results to make sure that we were not off in the weeds, but I had to finalize all criteria and the code without looking at the results.  I would not recommend playing poker with Andy.  I could not read him during our intermediate checks.  Once we ran through all the data, we, with some small sense of ceremony, “opened the box” and looked at the results.  The production rate was consistent with the f_J(2230) being a nearly pure bound state of gluons [R. Godang, et al., “Limit on the Two-Photon Production of the Glueball Candidate f_J(2220) at the Cornell Electron Storage Ring,” Physical Review Letters, 1997].  It is worth noting that the estimated mass moved from 2220 MeV to 2230 MeV over time, and that, in our particle physics collaboration, the first author was randomized, but I was the author of the analysis.  This was kind of a big deal, like an entirely new state of matter.  The problem was that we were dependent upon the measurement made by the BES collaboration.  We could not conveniently measure the direct electron-positron production of the f_J(2230) ourselves.  Still, it was pretty cool.  I had science reporters contact me, including one who just cold called me.  It was exciting.
However… over time, additional measurements of the f_J(2230) were made, and the measured characteristics changed.  Either the original BES measurement was really lucky (unlucky for us), or they accidentally tuned the processing chain to overestimate production and lifetime, which is dangerously easy to do.  In any case, over the next few years, the measured stickiness fell.  The specialness of the particle decayed, as it were.  It is a little sad, personally; I thought I was there at the discovery of a new state of matter.  But in science, measurements are redone and redone and studied further.  We now expect that there is some glueball component to the particle, but it is mixed with mesons.  In any case, the particle I was studying was not so special.

Graduation – As I was approaching graduation, my advisor (Hans Paar) suggested that I defend in the summer of 1997, at least that is the way I heard it.  I started writing my dissertation.  During one of his visits, he asked how it was going.  I said that I was organizing it, but I had time.  He mentioned that it was due at the end of April.  This was followed by a long pause.  Um, that is in 6 weeks.  In my memory, the room started spinning; I am sure that is not true, but I will stick with that part of the story.  In any case, I made a writing plan of what I had to get done each week.  The plan hung over my desk, and I worked long hours every day to get there.  I drank way too many 2-liter bottles of Dr. Pepper during those weeks.  Somehow, I pulled it all together, although I will not vouch for the quality of the writing.  Over the next year, I found out that I had two cavities – the only two I have ever had in my life – which I attribute to my dissertation.  Anyway, Hans had the habit of betting people for ice cream.  I have always had the sneaking suspicion that he had bet another prof that I would be the first one in the class to defend.

Anyone who has gone through the process of scheduling a defense will know the difficulty.  The committee rules for the UCSD physics department were complicated, too.  I had to have people from inside the department, near the department, and from other colleges, which added up to 6 people.  Additionally, because I was working with Prof. Peter Lepage (a legit legend), he graciously offered to sit on my committee, too.  Because of department politics that I never really understood, it was all a bit more complicated than it really needed to be.  There was a prof who wanted to get on my committee whom I did not want on it because of personalities.  Because I had both particle physics and some lattice gauge theory in my dissertation, a UCSD prof who did lattice gauge theory research wanted to be on the committee.  It turns out that both this lattice gauge theory prof and a particle physics prof thought it was important that they sit in at the defense.  They were both traveling back and forth to Europe, and their schedules had no overlap.  They missed by three days.  Hans said that I should schedule two defenses, on the last and first days that each was on campus.  He thought this was silly, and that someone would give in and sign off based upon reading and a short discussion.  He was wrong.  So, I had two PhD defenses.  Because they each had their own interests and did not want to hear about the other topic, the two defenses were on two different topics: one on particle physics and one on lattice gauge theory.  It was a long few days.  At the end I passed, but I sadly only got one degree.

Post-Graduation – After graduation, I worked for Hans as a postdoc for a couple of months.  I was trying to clean up some of my research.  I had multiple people talking to me about particle physics postdoctoral opportunities.  Unfortunately, with the defunding of the Superconducting Super Collider, the particle physics faculty job market was bleak.  We looked and counted five faculty positions across the US.  While I loved particle physics, it seemed like a good time to try something else.  My girlfriend at the time (who is a kind and brilliant person) got a job in Boston.  While Boston was not high on my list of places to live, it is an interesting city with some cool aspects.  Anyway, I was looking for jobs in the area.  It was a mildly confusing time.  I had no idea who would want to employ me.  I tried a few different paths.

Getting an Interview at MIT Lincoln – I talked to one of my friends (Bill Brower) who had graduated from UCSD a couple of years before me.  He was working for MIT Lincoln Laboratory.  He said that his group was not looking for anyone, but that I should talk to another UCSD physics grad (Steve Crooks) who was in a different group.  It turns out that Steve had been the TA for my electromagnetics class, and he happened to remember me because I had done well.  He took my resume to his Associate Group Leader, Randy Avent (who later was the founding Florida Polytechnic University president).  For reference, Lincoln groups typically have 50 to 100 employees, with nearly half having PhDs.  Anyway, apparently, Randy told Steve that the group did not need any more physicists.  They had recently hired a string theorist, Ali Yegulalp (who is really brilliant).  As an amusing aside, both Ali and Steve ended up working in the financial sector.  Anyway, I called Steve, and he told me about Randy’s response.  I said that I understood.  I did not know why they would want to hire me either.  I asked what I could learn or do so that I would be of interest.  It gets a little funny at this point.  Steve suddenly got indignant.  He said that he did not know.  He said that he would call me back.  Steve walked right into the Group Leader’s office and told Irv Stiglitz that he should invite me out for an interview.  Now, the world is divided into two groups: those who have not met Irv, and those who have.  Irv is one of those larger-than-life individuals.  He also only spoke at one volume, turned to 11.  You could hear him when he was within a quarter mile.  Apparently, Steve’s request worked.  However, in the group office, out in front of everyone, after being told my research topic, he bellowed, “Glueballs?  I don’t give a rat’s ass about glueballs.”  It turns out “rat’s ass” is a standard point of reference when talking to Irv.
He also announced to Steve, with regard to me, “He better be good.”  I came out, gave a job talk, partly about glueballs, and had an offer within two weeks in 1997.  As I tell students, these employment things are often not about you, so do not take hiring decisions personally.  My timing happened to be good.  The group had just gotten a couple of new projects.  I got lucky this time.

My Time at Lincoln – MIT Lincoln Laboratory is a fascinating place.  I joke that it is mostly an idea.  When you work there, you are an MIT employee, the facilities are on Air Force land, and most of the buildings are leased to Lincoln.  The funding comes from projects.  Lincoln is a federally funded research and development center (FFRDC).  As an FFRDC, it receives some congressional line funding, but it is not a significant amount.  The line funding is a few tens of millions of dollars, but the research expenditures are now about a billion dollars.  Nearly all funding comes from Lincoln directly getting on DoD programs.  Effectively all funding comes through a single Air Force contract (which makes contracting interesting and often easy).  I regularly worked with people from campus (MIT), a lot of bright people.  However, I have to say that the research and engineering people at Lincoln are simply some of the brightest people on the planet.  Lincoln is very project oriented.  You have to make real things.  You could see that the faculty, as bright as they are, often did not see, or maybe care about, the driving applications or system needs.  In any case, I had the opportunity to work with many wonderful people at Lincoln and MIT.  Because there are so many bright people at Lincoln, it could become a bit insular.  You often did not need to talk to people outside Lincoln because world experts were down the hall.  There is no way I can relay all the interesting stories in this supposedly brief narrative.

During my time at Lincoln, I had the opportunity to work with many brilliant people.  Importantly, I worked with Keith Forsythe – a mathematician who knew more about engineering than nearly any engineer that I have known.  I think that he is the brightest person that I ever met.  Before I came to Lincoln, Keith had worked a lot with Ed Kelly at Lincoln, who performed much of the foundational work in detection and estimation theory associated with antenna arrays.  Because we had worked together for a while, Keith and I could have deep technical conversations within just a couple minutes.  I really appreciated the efficiency of those conversations.  At one point, I was working on some technical topic, and someone said that you need to talk to another person.  This other person was super nice, and I really liked him, but I thought I was going to lose my mind.  For over an hour we talked (ok, he talked) through an idea that was clear within 30 seconds.  I never went into his office again for a technical discussion.  Another person at Lincoln that I deeply appreciated was David Goldfein.  David is a living encyclopedia of systems, bounds, and algorithms.  He had produced code that would automatically build hundreds of slides, filled with plots, based on various system parameters and approaches.  It was hilarious.  Early on, when we were still using projectors and transparencies, I would see him come into sponsor briefings with multiple big three-ring binders full of slides.  Based upon the interest of the sponsor, he could just build a talk with results in real time.  It was a thing to behold.

When I started working at Lincoln, Irv Stiglitz was the group leader, so I worked for him, although other people often led projects; consequently, I would often work more directly with them.  My first project, the DARPA Novel Antenna Program, was run by Paul Monticciolo.  In my entire career, the only person that could make me nervous was Irv.  Over time I got used to him, but in the beginning, he was a lot.  He was famous for ripping apart the talks of new people in the group, partly to see how they reacted.  In dry runs, he would tear your talks and ideas apart, at full volume.  At one point, he bellowed at me that my talk was unintelligible, asserting that my presentation was not clear because I could not think clearly.  I am not sure, but I am fairly certain that “rat’s ass” was used as a part of those discussions.  He came from a culture in which trying to tear ideas down was a key part of making sure that your ideas were good.  I still appreciate the intent, if not the volume.  At the dry run for my first highly visible talk, I think he made the mistake of getting technically interested.  While he asked questions on every slide, he did not really come after me.  We got to the end, and I could almost see something click behind his eyes; he said to go back to slide so-and-so.  Ok, I went back to the slide, and he said that the slide did not make any sense.  I sheepishly responded that it was Paul’s slide.  He bellowed that when you make a slide deck, you need to make it your own.  In fairness to Paul, I probably did not do the slide justice in my discussion, but I was still amused.

Novel Antenna Program – As my first real program at Lincoln, the Novel Antenna Program (which was led by Paul) had a huge influence on me and provides an intellectual basis for much of my thinking to this day.  In retrospect, in 1997, we were running an early experimental massive multiple-input multiple-output (MIMO) system, although that phrase was not in use at the time.  We used a 16-antenna system to disentangle a bunch of experimental radio signals that were designed to mimic 3G phone waveforms.  One of the team members built from scratch what we now call software-defined radios.  We would put them in our cars and drive around Cambridge and Boston while we had our 16-antenna receiver on top of a parking garage on the MIT campus.  Everything was synced to GPS, so all the distributed transmitters would send bursts simultaneously, and the receiver would receive them.  As an amusing aside, one of the people who helped on the experiments was a consultant, Tony Tether.  He was responsible for the rather unglamorous job of pushing a cart with a radio up and down a sidewalk.  Not long after this time, he became the DARPA director.  At Lincoln, we worked on a wide range of algorithms for disentangling the multiple signals.  We also worked on advanced angle-of-arrival and time-difference-of-arrival techniques.  It was a huge opportunity for me.  I would talk to Keith about various ideas, and I wrote thousands upon thousands of lines of code to implement many techniques.  I was in a technical candy store.  Working with Keith, I wrote a space-time multiple-antenna multiuser detection receiver, which we called a multichannel multiuser detector (MCMUD).  In the code, I gave it the title the Illudium-Q36-space-time-demodulator.  Anyway, much of our work was never published.  At the time, Lincoln – at least my group – had little interest in publishing.  I once had a sponsor tell me that it never makes sense to publish.
If it is not interesting, then it is not worth publishing.  If it is interesting, then you should not publish because it is valuable.  I suppose there is some truth in it.  Nonetheless, we wrote a few papers [K. W. Forsythe, D. W. Bliss, C. M. Keller, “Multichannel adaptive beamforming and interference mitigation in multiuser CDMA systems,” Asilomar Conference on Signals, Systems, and Computers, 1999.; D. W. Bliss, K. W. Forsythe, “Angle of arrival estimation in the presence of multiple access interference for CDMA cellular phone systems,” IEEE Sensor Array and Multichannel Signal Processing Workshop, 2000.] and received a patent for MCMUD [US Patent 6,745,050], although it always took a few years to get around to it.

Maybe a Startup – Based upon the MCMUD patent Keith and I had, there was an opportunity to significantly improve the capability and robustness of cellular base stations with the algorithms that we had developed and demonstrated.  What followed were some of the strangest meetings that I have ever been in.  To be clear, this was just before the 2002-2003 market crash.  Some venture capitalists saw our invention disclosure and wanted to talk to us.  There were multiple meetings at which we sat around the table.  What quickly became clear was that there were four different interests sitting around the table: MIT, which just wanted to license the tech and was happy to send us along with it; Lincoln, which just didn’t want to lose any people; the venture capitalists (VCs), who wanted to figure out how to leverage this tech; and us (the tech folks connected to it).  At some point the VCs invited me to a breakfast meeting.  I felt like I was in some sort of scene from a movie.  They suggested that I start a company, and they could bring about $30M to build a first version.  That was back when a million dollars meant something.  Anyway, I was interested, but I was concerned.  I was aware that there were really three big base station companies, and you needed an “in” to get your technology integrated into their systems.  Even if you have a better technology, working with those companies is difficult.  Not all problems are technical.  They had no answer for my concerns, so I pulled back.  Frankly, I was thinking about pursuing a professorship, so that was part of it.  Given the market crash that soon followed, I suspect that I got lucky.

MIMO Radar – At the end of 1999 and the start of 2000, I started to transition to working on an advanced radar program.  It involved advanced radar techniques, systems, EW, EP, … To be honest, my head was still in the communications side of things.  I got some internal funding to explore experimental synthetic aperture geolocation and space-time-frequency extensions to MCMUD for multiple-input multiple-output (MIMO) communications.  My goal was to enable distributed coherence, which I had pitched to DARPA in May of 2000.  I still see some of those graphics show up in DARPA slides.  However, toward the end of 1999 and start of 2000, Gerald Titi – who goes by Ti – encouraged me to do more radar work.  I will say Ti was the most amazingly detailed system engineer that I have ever worked with.  I have often joked that when working on a system, he would know every detail down to the thread count of the screws holding the box together.  I am only half joking.  Anyway, during this time I had the opportunity to learn about synthetic aperture radars (SARs) and ground-moving-target-indicator (GMTI) radars.  I invented some novel radar EP techniques.  Maybe because I was working on MIMO communications on the side, during 2000 – once again in collaboration with Keith Forsythe – I developed the idea of coherent-target-response MIMO radar.  I actually developed multiple variants of it, but never got around to publishing them.  One waveform technique that we developed was Doppler-division multiple access (DDMA) for MIMO radar, which was an idea that I lifted from some slightly different work of the amazingly sharp Pat Bidigare.  Anyway, as was often true, it took us three years to finally write a paper [D. W. Bliss, K. W. Forsythe, “Multiple-input multiple-output (MIMO) radar and imaging: degrees of freedom and resolution,” Asilomar Conference on Signals, Systems and Computers, 2003.].
There are several versions of MIMO radar (over-the-horizon HF MIMO radar and statistical MIMO radar, for example, with a range of waveforms), and other people developed similar ideas independently, but this is often true in science and technology development.  I later realized that some of these ideas were previously developed for the RIAS project in 1984.  In any case, as far as I am aware, we were the first to invent GMTI MIMO radar, which operates in a particularly challenging environment and is interesting for MIMO radar because of the extreme clutter mitigation requirements.  Anyway, Joe Guerci (who is a brilliant engineer, was an early contributor to space-time processing for GMTI, and wrote a book on MIMO radar) said that I invented MIMO radar (at least in this context), so I will stick with that.  Over the years, we wrote numerous papers on MIMO radar.  I now see MIMO radars everywhere, and I don’t get a penny from it.  Such is life, I guess.  Still, it is nice to know that your technical contributions have broader impact.
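Since DDMA comes up above without much explanation, here is a minimal toy sketch of the idea: each transmitter applies a distinct pulse-to-pulse phase ramp so that its energy lands at a distinct Doppler offset, which lets the receiver separate the transmitters in slow time.  The parameters and the idealized, noise-free channel model below are illustrative choices of mine, not the Lincoln implementation.

```python
import numpy as np

n_tx = 4       # transmitters
n_pulse = 64   # pulses in the coherent processing interval (multiple of n_tx)

# DDMA: transmitter m applies a pulse-to-pulse phase ramp exp(j*2*pi*m*k/n_tx),
# offsetting its energy to a distinct slice of the Doppler spectrum.
k = np.arange(n_pulse)
tx_ramps = np.exp(2j * np.pi * np.outer(np.arange(n_tx), k) / n_tx)

# Toy received slow-time sequence for one range bin: the superposition of all
# transmitters through unit-gain channels, stationary scene, no noise.
rx = tx_ramps.sum(axis=0)

# The receiver separates the transmitters by demodulating each Doppler offset
# and averaging over the CPI; the cross terms integrate to zero.
recovered = np.array([(rx * np.conj(tx_ramps[m])).mean() for m in range(n_tx)])
print(np.round(np.abs(recovered), 3))  # each transmitter recovered with gain ~1
```

With moving targets, each transmitter's return also carries the target Doppler, so the unambiguous Doppler space is divided among the transmitters – the core trade of DDMA.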

A few years after publishing our MIMO radar paper, we developed the first (to my knowledge) demonstration of a MIMO GMTI system.  Amusingly, I went to talk to the brilliant and thoughtful Jim Ward – who was my boss and then moved into the division office – about getting funding for a distributed coherent radio testbed, and he said that was great, but how about a MIMO GMTI experimental system instead.  I had to wait on the radio testbed.  At this time, Lincoln’s CTO was Zach Lemnios, who became the Assistant Secretary of Defense for Research and Engineering soon after this project.  Zach was a great supporter of our work, provided internal funding, and pushed us to demonstrate it all.  I wrote the full processing chain in MATLAB.  At some point, I handed the experimental leadership of the project over to Shakti Davis, who had to deal with all the experimental challenges (for which I will always be thankful), and there were many.  Eventually, we were able to cobble together a system and integrate it onto a Twin Otter airplane.  We mounted an array of cheap patch antennas onto what we called the surfboard and attached the surfboard to the Twin Otter.  Because this was an experimental setup, we had to observe experimental aircraft rules.  Because our funding was limited, we could only afford two flight tests.  With a bunch of help from people in Paul Monticciolo’s group, we set up an experimental scenario of moving vehicles and people at a local military base.  On the first day, we had a few scripts for driving and walking patterns, and we drove our scripted patterns as the Twin Otter flew in for several passes.  The next day and over the weekend, we analyzed the data.  We had problems with the data recorder and with the GPS receiver, so the data were all bad.  On our second day, a couple of weeks later, low clouds hung darkly in the sky.  The plane could not take off because of the experimental aircraft limitations.  We sat there for hours.
It was starting to look like it was not going to happen.  The cloud ceiling had to get to 10,000 feet.  Then, over the radio, we heard that the pilot had taken off.  To be honest, I have no idea what a 10,000-foot cloud ceiling looks like, but I have always thought that the pilot was a little creative in determining that we had gotten there.  Anyway, we ran to our positions.  I drove my Mini Cooper around the track.  We got back and looked at the data.  It is always a run, modify, run, modify loop, but after another weekend of effort, it was working as expected [J. M. Kantor, D. W. Bliss, “Clutter covariance matrices for GMTI MIMO radar,” Asilomar Conference on Signals, Systems, and Computers, 2010.].  I did not actually publish my detection performance results, but I did brief them many times.  I showed that the MIMO radar could detect targets that a standard radar missed.  Also, I worked with the excellent Josh Kantor, who continued to refine the code and work with the data.

Communications Approaches and Radios – Over the next several years, for multiple government sponsors, we developed radio systems that leveraged multiple-antenna SDR architectures for robust communications.  I worked first for Jim Ward and then Gary Hatke, to whom I owe much for what he taught me about systems and about leading programs, and for letting me chase funding for my ideas.  We did everything from extending basic theory to building systems in target packaging.  There are too many people to mention in recounting all our efforts in this area, but Tim Hancock and later Adam Margetts were key to our efforts.  There are many interesting stories from this time while working on these communications systems.  Admittedly, some of the stories I am not allowed to discuss.  However, there is one story that I always tell students because it is important for them to understand how real systems work.  It’s related to the idea that nothing works until you test it and fix the problems that are hiding from you.  We (with Tim) had built this really cool custom, compact, robust SDR MIMO communications system.  When you build these sorts of systems, you develop in spirals: develop concepts, develop simulations, develop a lab breadboard system, develop a preliminary hardware system, and then develop a target system.  During each spiral you find and fix problems.  We had gotten to the last spiral.  The program manager was going to visit about a week out.  One side of the link was constructed from three small boards that folded up into a small final package.  With the trifold open, we did test after test, finding and fixing problems.  Then, it got to the point that it was working reasonably well, if not perfectly.  We folded it up, and it worked pretty much the same.  Well, it should.  We put the lid on the package, and the radio failed.  That should not happen.  It did not fail every time, but it was unreliable.  Ok, it must be a mechanical short because of the lid, right?
After several hours of investigation and testing, nope.  It was not a short.  Over the next few days, 16-hour days, we tried so many tests to try to determine what was going on.  Then, late at night – I think it was after midnight – one of the younger engineers tried just sitting an old CD jewel case on top.  The link failed.  This, of course, was obviously impossible, but it was somewhat repeatable.  After staring at the situation, we (it might have been Tim) realized that the frequency synthesizer was near the lid.  We realized that the extra capacitance of the lid or jewel case was loading the input of the voltage-controlled oscillator (VCOs are notoriously sketchy devices), which then caused the settling time of the entire frequency synthesizer to increase well out of spec.  The resulting frequency error was causing the communications acquisition and synchronization to fail.  We considered a few solutions, but the easy answer was just to give the radio system a lot more time to settle (well beyond the synthesizer spec) before starting operation.  There are a few things to take from this.  Firstly, if you are not building systems, you will not be exposed to many real problems.  Secondly, you must test everything because nearly everything is broken until you fix it.  Finally, when a student comes to me and tells me that the spec sheet for some part says that it will work, I do a face palm.  I tell them that spec sheets always lie.  You must test it.

Simultaneous Transmit and Receive (In-Band Full Duplex) – At some point, Ti had mentioned that general simultaneous transmit and receive had never really worked for radar.  To be clear, FMCW receivers that employ stretch processing do it all the time, but the signal is effectively shifted in frequency because a delayed chirp at the receiver creates a frequency shift.  In any case, in about 2006, I became interested in how this would work for communications systems.  The basic problem is that the transmitted signal is typically at least 100 dB stronger (more than a factor of 10 billion) than the received signal.  You want to remove the self-interference so that the residual is down by that much and you can see the external received signal.  In theory, this is easy because you know what you sent.  In practice, there are lots of things that distort your signal.  I knew that to get decent self-interference mitigation performance, you would want to develop an approach that employed a series of techniques.  Importantly, each successive mitigation approach could not inhibit later techniques.  This is more difficult than you might think.  Anyway, I developed a technique that employed space-time transmit beamforming matched with space-time receive beamforming to minimize the transmitted energy appearing at the receiver while maximizing the external signal of interest.  Simultaneously, I used temporal mitigation in digital processing to further reduce the interference.  I worked with an extremely capable Lincoln Scholar student, Peter Parker, who was a Harvard PhD student.  We used our experimental SDR system to demonstrate and compare techniques [D. W. Bliss, P. A. Parker, A. R. Margetts, “Simultaneous Transmission and Reception for Improved Wireless Network Performance,” IEEE Workshop on Statistical Signal Processing, 2007.].  It worked reasonably well, but it is a difficult thing to do.  I still have active programs on the topic.
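To make the 100 dB figure concrete, here is a toy self-interference budget showing why a cascade of mitigation stages is needed.  The stage names echo the paragraph above, but every number is hypothetical.

```python
# Hypothetical in-band full-duplex self-interference budget (illustrative
# numbers only): no single stage delivers ~110 dB, so stages must cascade,
# and each stage must not distort the signal in a way that breaks later ones.
tx_power_dbm = 20.0        # transmit power
noise_floor_dbm = -90.0    # receiver noise floor

stages_db = {
    "spatial (transmit/receive beamforming)": 45.0,
    "analog cancellation": 25.0,
    "digital temporal cancellation": 40.0,
}

residual_dbm = tx_power_dbm - sum(stages_db.values())
print(f"residual self-interference: {residual_dbm:.1f} dBm")
# The external signal is only recoverable once the residual reaches the
# noise floor, i.e., ~110 dB of total suppression in this example.
print("at/below noise floor:", residual_dbm <= noise_floor_dbm)
```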

Advanced Processors for SDRs (The First Try) – Because of my work on these advanced SDR systems, I came to the conclusion that the processors that we were using did not really solve our problems.  The scalar processors, traditional CPUs, were not sufficiently efficient, and you could not really build a new application-specific integrated circuit (which is about 100 times more efficient) every time you wanted a new radio system.  We typically used field-programmable gate arrays (FPGAs), but these had their own problems of not being particularly efficient, being difficult to program, and not having real-time flexibility.  The answer seemed clear to me: you wanted to develop a coarse-scale heterogeneous processor with multiple accelerators for the various expensive mathematical functions and small CPUs for the busy work.  This was followed by many years of trying to get some funding organization to support this work.  I tried DARPA, IARPA, and other government organizations, but the best response I got was: that’s a great idea; come be a program manager and start a program on it.  Maybe I should have taken them up on it, but I wanted to do it, not watch people do it.  I had other technical interests, too.

Distributed Coherence – I never let the wonderful Paul Kolodzy forget that I first tried to get a program started in May of 2000, when he was a DARPA program manager.  Amusingly, I saw portions of that slide deck show up every few years, even 20 years later, so things do take on lives of their own.  As a side project, I worked on techniques and channel phenomenology [D. W. Bliss, et al., “MIMO wireless communication channel phenomenology,” IEEE Transactions on Antennas and Propagation, 2004] for a number of years after that.  I developed space-time-frequency beamformers for both transmit and receive that enabled operation with distributed systems [D. W. Bliss, et al., “Transmit and receive space-time-frequency adaptive processing for cooperative distributed MIMO communications,” Acoustics, Speech and Signal Processing (ICASSP), 2012.].  A little after 2010, I met Mark Rich at DARPA.  I met with him several times and discussed both advanced processor design and distributed coherence.  It is hard to describe Mark.  He was kind, smart, and better at getting programs off the ground than anyone else that I have known.  I learned much from him, and I still miss him.  After several months of discussions, Mark found a path at DARPA for starting a program that had a bit of both: the DARPA CLASS program in 2011.  I was the technical lead at Lincoln for the program.  The system employed a distributed set of radios that coherently transmitted a space-time beamformed signal to an antenna-array receiver that both received the signal and mitigated any interference with an adaptive space-time beamformer.  We designed and built a SoC for the receiver that estimated the channel and computed the joint transmit and receive space-time beamformers; the transmit beamformer was sent back to the transmit array through a side channel.  It was an amazingly cool system.  I did not get to see the end of the program, because during the program, I moved to ASU, but I received occasional updates.
Adam Margetts took over the technical leadership and shepherded it to completion.  It worked pretty much the way we expected, which surprised many.

Academia – I had wanted to be a professor since I was in the fourth grade.  When I left particle physics for engineering, I thought I had sealed my fate and that I would never be a professor.  At some point, I happened to work with the University of Michigan’s stunningly brilliant Al Hero, and he suggested that there was a path for me to do it.  So, again it was an option.  I spent the next decade trying to set myself up for getting a position.  For a variety of reasons, moving from MIT Lincoln Laboratory to academia was not easy, at least for me.  To start, leaving Boston was not popular at home.  I must thank my brilliant spouse, who was a group leader at Lincoln, for agreeing even though there was a significant career cost to her.  Also, because I had an electrical engineering undergraduate degree and a physics PhD, my background raised initial concerns.  I had been working for 15 years as an electrical engineer, so clearly EE was the right path.  Another issue was probably the somewhat short-sighted academic instinct of chasing technical fads.  Each year, schools frantically try to hire in whatever is hot.  The hot topic changes every couple of years.  I have watched it go from wavelets, to compressed sensing, to smart grids, to machine learning.  While universities should work to stay relevant, the overreaction leads to missing out on good faculty hiring opportunities and sometimes strange balances of faculty.  I was told by someone I knew on the faculty search committee of a university to which I applied that they were interested, but they did not know what to do with me.  I was too senior to be hired as junior faculty and lacked the faculty experience to be hired as senior faculty.  They could not figure out how to hire me.  I suspect that this happened more than once because I would get initial signs of interest, but then nothing.  I must thank our great school director Steve Phillips, because he did not think it was that complicated.
ASU offered me an untenured associate professorship with an accelerated clock to tenure.  I started at ASU in December 2012.

Moving to ASU – It is strange that I came back to ASU.  Long ago, when dinosaurs roamed the Earth, I did my undergrad at ASU.  It was a great place to do my undergraduate degree, but at the time, it was not a serious research university.  I had been thinking about going into academia and had explored it a few times.  I made a list of schools that I thought were serious research schools, but ASU was not on the list.  I was doing one of those random web walkabouts thinking about academia, and I ran across an article about ASU research.  I was stunned.  ASU was simply not the same university that I had attended.  There are a variety of factors that enabled this change, including the economic growth of Arizona, but certainly the vision and drive of ASU’s president Michael Crow were key.  While it may not be the best metric, an important metric is external research expenditure.  Over the course of two decades, ASU went from a few tens of millions of dollars of research expenditures to $800 million.

RF Convergence – Because of my historical interests in radar, communications, and software-defined RF systems, it is probably not a surprise that I became interested in multiple-function RF systems.  You can tell that a field is quickly evolving because it has had several names over the last several years: spectrum sharing, integrated sensing and communications, and – because I had an ONR sponsor who told me to call it this – RF convergence.  Starting in 2013, I had the opportunity to work on the DARPA SSPARC program.  During this program, I developed the idea of the estimation information rate for a dynamic target, so that I could put communications and radar on a similar footing and produce a rate-rate bound plot similar to the communications multiple-access bound [D. W. Bliss, “Cooperative radar and communications signaling: The estimation and information theory odd couple,” IEEE Radar Conference, 2014.].  The estimation information rate can be interpreted as the data rate required to keep up with what is learned about a target under ideal source compression.  When I first developed the approach and made the rate-rate bound plot, I yelled from my home office to the family, “hey, I know what I am working on for the next few years.”
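One hedged, toy way to read the estimation information rate (a simple Gaussian sketch, not necessarily the exact formulation in the 2014 paper): each radar dwell shrinks the uncertainty about the target state, and the entropy reduction per dwell, divided by the dwell time, is the bit rate an ideal source coder would need to keep up with what the radar learns.

```python
import math

def estimation_rate_bps(prior_var, post_var, dwell_s):
    """Bits/second implied by shrinking a Gaussian state variance each dwell.

    The entropy reduction of a Gaussian whose variance drops from prior_var
    to post_var is 0.5*log2(prior_var/post_var) bits.
    """
    bits_per_dwell = 0.5 * math.log2(prior_var / post_var)
    return bits_per_dwell / dwell_s

# Hypothetical numbers: a 10 ms dwell that shrinks the variance by 100x
# "learns" about 3.3 bits, i.e., roughly 330 bits/s of estimation rate.
rate = estimation_rate_bps(prior_var=1.0, post_var=0.01, dwell_s=0.01)
print(f"{rate:.0f} bits/s")
```

Putting radar on a bits-per-second footing like this is what makes a communications-versus-radar rate-rate bound plot possible.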

WISCA – I knew when I started at ASU that I wanted to build a mechanism to go after bigger programs that went from theory to building chips.  In 2015, I started trying to form ASU’s Center for Wireless Information Systems and Computational Architectures (WISCA).  It was officially established in July of 2016.  It is probably my Lincoln training, but maybe it runs deeper than that: I firmly believe that to make significant progress you need to span from fundamental theory, to algorithms, to implementation, to computation, to building fieldable systems.  Building domain-specific SoCs is often the right answer.  I would say it has worked.  We have run nearly $50 million in programs through WISCA.

My more recent work probably deserves more discussion, and I have worked with many wonderfully creative and brilliant people within my lab and externally, but for now I will quickly review these topics and leave a deeper discussion for another time.

Cardiac Radar – I was talking to some colleagues that I knew at Nokia Research.  They were interested in physiological monitoring by using small-scale radars.  In 2017, they provided a small gift to the lab, which we used to start exploring.  We have been getting better and better at extracting physiological signals by using radar ever since.  At this point, I think we have every small-scale radar ever made in the lab.  We explored capabilities and phenomenology from low-frequency impulsive radars (a couple of GHz) to higher frequency (77 GHz) MIMO systems driven by automotive radar technology.  We developed novel signal processing chains that allow for the extraction of breathing, heart rate, R-to-R intervals, and – in the best-case scenarios – even the sound of the heartbeat.  This is, in effect, a remote radar “stethoscope,” which is pretty cool [Y. Rong, I. Lenz, D. W. Bliss, “Non-Contact Cardiac Parameters Estimation Using Radar Acoustics for Healthcare IoT,” IEEE Internet of Things Journal, 2023].  To increase the impact of our research, we have spun out a startup, Big Little Sensor Company.
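The core signal processing idea behind radar vital-sign sensing can be sketched compactly: chest motion modulates the phase of the slow-time return at the chest’s range bin, and displacement is recovered as (wavelength/4π) times the unwrapped phase.  The snippet below runs that chain on a synthetic 77 GHz signal; the displacement amplitudes, rates, and noise level are illustrative stand-ins, not our lab’s actual processing chain.

```python
import numpy as np

fs = 100.0                    # slow-time (pulse) rate in Hz
t = np.arange(0, 30, 1 / fs)  # 30 s observation
wavelength = 3e8 / 77e9       # ~3.9 mm carrier wavelength at 77 GHz

# Synthetic chest displacement (meters): mm-scale breathing at 0.2 Hz plus a
# much smaller heartbeat component at 1.2 Hz.
disp = 2e-3 * np.sin(2 * np.pi * 0.2 * t) + 2e-4 * np.sin(2 * np.pi * 1.2 * t)

# Complex slow-time signal at the chest range bin, with a little noise.
rng = np.random.default_rng(1)
sig = np.exp(1j * 4 * np.pi * disp / wavelength)
sig = sig + 0.05 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

# Recover displacement from the unwrapped phase and locate the dominant
# (breathing) rate in the spectrum.
est_disp = np.unwrap(np.angle(sig)) * wavelength / (4 * np.pi)
spec = np.abs(np.fft.rfft(est_disp - est_disp.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(f"dominant rate: {freqs[spec.argmax()]:.2f} Hz")  # breathing at 0.20 Hz
```

Pulling the heartbeat out from under the much larger breathing motion is where the harder filtering and harmonic-rejection work lives; that is the part that took years to refine.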

More Distributed Coherent Systems – As I have previously mentioned, I have had a long-term interest in distributed coherent systems.  In theory, by using several simple radio systems, you can construct the capabilities of a system with a large antenna array.  Working with Airbus, we had the opportunity to develop an interesting variant of a distributed coherent system.  They were looking for robust, accurate positioning technologies for urban air mobility (flying cars), but they would only allow us a relatively narrow bandwidth [A. Herschfelt et al., “Joint Positioning-Communications System Design and Experimental Demonstration,” IEEE Digital Avionics Systems Conference (DASC), 2019], [H. Yu, et al., “Communications and High-Precision Positioning (CHP2): Hardware Architecture, Implementation, and Validation,” Sensors, 2023], [D. W. Bliss, “Phase-accurate vehicle positioning systems and devices,” US Patent 11,719,807].  Typically, if you want better position accuracy, you use more bandwidth, as the estimation error is proportional to the speed of light divided by the bandwidth.  We developed the theory, algorithms, waveforms, and a demonstration system that did joint communications and MIMO time-of-arrival estimation.  We did carrier-phase-accurate timing exchange between two 4-antenna systems.  While a naive expectation given our bandwidth would be about 15 m of range resolution, we got the timing error down to the equivalent of sub-centimeter range without any external information.  Because we had MIMO arrays, we could use the relative phase to get angle information, too.  There were a wide range of technical challenges.  We stressed the capabilities of the FPGAs that we were using to do the calculations.
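The bandwidth rule of thumb in the paragraph above is easy to quantify.  Both parameters below are hypothetical (the actual program bandwidth and carrier are not stated here); they are chosen only to illustrate the gap between naive and carrier-phase-accurate ranging.

```python
# Naive time-of-arrival accuracy scales like c/B, while carrier-phase methods
# work at the scale of the RF wavelength; hypothetical parameters throughout.
c = 299_792_458.0          # speed of light, m/s
bandwidth_hz = 20e6        # hypothetical narrow allocation
carrier_hz = 5e9           # hypothetical carrier frequency

naive_resolution_m = c / bandwidth_hz
wavelength_m = c / carrier_hz
print(f"naive range resolution: {naive_resolution_m:.1f} m")   # ~15 m
print(f"carrier wavelength: {100 * wavelength_m:.1f} cm")      # ~6 cm
# Tracking the carrier phase to a few degrees of this ~6 cm wavelength is
# what makes sub-centimeter ranging plausible despite the narrow bandwidth.
```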

We continued to develop distributed coherent array processing techniques.  We built an experimental system based upon arrays of SDRs for a range of applications.  We have demonstrated multiple approaches [J. Holtom, et al., “Distributed Beamforming Techniques for Flexible Communications Networks,” Asilomar Conference on Signals, Systems, and Computers, 2021].  On the DARPA RNDMC program, we continued to make progress, and more recently on the Space Force OpEn-DisCo program.  This is an example of how technological innovation takes decades.

Next Generation SoCs – Not long after starting at ASU, I had an opportunity to work again with Mark Rich, as he had moved to Google.  On the radio revolution program, we worked through a range of ideas on how to develop domain-specific processors.  The idea was to replace the current transceiver and modem processor of cell phones to make them more flexible and upgradable.  We made some progress, but for reasons known only to Google, once it looked like we were starting to make real progress, Google cancelled the project.  However, this put us in a good place, both technically and in building a collaboration, to propose the DASH processor [A. Venkataramani, et al., “The DASH SoC: Enabling the Next Generation of Multi-Function RF Systems,” Asilomar Conference on Signals, Systems, and Computers, 2022.] for the DARPA Domain-Specific SoC (DSSoC) program.  In starting the DASH project, I had the chance to work with the visionary Tom Rondeau – who started the DSSoC program and has since moved into OUSD(R&E).  We are currently working towards a fieldable version of the SoC.  Because of our success on the DASH program, we were able to leverage some of its technology for the DARPA Space-BACN program, on which we are developing a flexible modem processor for space optical communications.  Finally, we have recently started on the DARPA PROWESS program.  To aid in the commercialization of the chip development programs, I founded the company DASH Tech Integrated Circuits [dashtechic.com].

Daniel W. Bliss