Thread: NASA collision avoidance system saves unconscious F-16 Pilot

  1. #1

    NASA collision avoidance system saves unconscious F-16 Pilot
    by Matt Kamlet, Public Affairs for AFRC News
    Edwards AFB CA (SPX) Sep 21, 2016



    The U.S. Air Force's F-16D Automatic Collision Avoidance Technology, or ACAT, aircraft was used by NASA's Armstrong Flight Research Center and the Air Force Research Laboratory to develop and test collision avoidance technologies. Image courtesy NASA and Carla Thomas.

    Two pilots who credit a NASA-supported technology with saving one of their lives during a May training exercise mishap paid a visit to NASA Armstrong Flight Research Center in Edwards, California, to meet with some of the very engineers responsible for its development. A United States Air Force Major and F-16 flight instructor, and a foreign Air Force pilot student, spent an afternoon at the NASA center, as guests during the center's 2016 NASA Honor Awards.
    The pilots spent the day with NASA Armstrong center director David McBride, project manager Mark Skoog, and several other engineers and managers responsible for developing and advancing the Automatic Ground Collision Avoidance System, or Auto-GCAS. Both pilots say that without the system, developed in part by NASA, one of them would not be alive today.
    Auto-GCAS is an aircraft software system that activates when it detects a collision course with the ground. It first warns the pilot; if it determines that ground impact is imminent, it locks out the pilot's controls and performs an automatic recovery maneuver, returning full control to the pilot once the aircraft has cleared the terrain.
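    A minimal sketch of that decision loop, with hypothetical names, thresholds, and a flat-terrain model purely for illustration (the actual Auto-GCAS algorithms are far more sophisticated and are not reproduced here):

        # Illustrative sketch of an automatic ground collision avoidance loop.
        # All names, thresholds, and the recovery logic are hypothetical;
        # this is not the actual Auto-GCAS implementation.
        from dataclasses import dataclass

        WARN_SECONDS = 6.0      # cue the pilot this far before predicted impact
        RECOVER_SECONDS = 1.5   # take control when impact is this imminent

        @dataclass
        class State:
            altitude_ft: float        # height above the terrain
            descent_rate_fps: float   # positive when descending

        def time_to_impact(state: State) -> float | None:
            """Seconds until ground impact if the current descent rate holds."""
            if state.descent_rate_fps <= 0:
                return None           # level or climbing: no impact predicted
            return state.altitude_ft / state.descent_rate_fps

        def auto_gcas_step(state: State) -> str:
            """Return the action the system would take for this state."""
            t = time_to_impact(state)
            if t is None or t > WARN_SECONDS:
                return "monitor"      # pilot keeps full control
            if t > RECOVER_SECONDS:
                return "warn pilot"   # audible and visual pull-up cue
            return "take control: roll wings level, pull up, hand back control when clear"

        print(auto_gcas_step(State(altitude_ft=3000, descent_rate_fps=900)))  # warn pilot
        print(auto_gcas_step(State(altitude_ft=1200, descent_rate_fps=900)))  # take control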
    "There have been numerous accident reports over the years where it's been pilot error," explained the flight instructor, who graduated from pilot training in 2007 and now teaches young pilots how to fly F-16s. "That's one of the things that frames my discussion with a lot of the young students that I teach, is that your chances of dying in combat are up there, it's a dangerous thing. But most F-16 pilots over the years die in training accidents."
    The pilots, flying with the Air National Guard in Tucson, Arizona, had been conducting a standard training scenario, known as basic fighter maneuvers, or BFM, in F-16s. For the student, it was his first high-aspect BFM flight. In essence, the scenario was designed for the student to fly a head-on pass with the instructor, with both aircraft initially flying directly at each other. Then, once they pass, or "merge," each pilot tries to out-maneuver the other. The exercise is meant to train pilots in the maneuvers necessary for aerial combat, and requires three-dimensional maneuvering under high g.
    Following the pass, the student banked his F-16 and began maneuvering, pulling more than 8 g. At this point he experienced what is known as g-induced loss of consciousness, or G-LOC, and blacked out.
    The aircraft, meanwhile, continued to bank, rolling to approximately 135 degrees, allowing the nose to start slicing and causing a steep dive toward the ground. The situation was especially perilous since the student, having intended to maneuver with high gravitational force, had advanced his throttle to "full afterburner" and significantly increased his aircraft's thrust.
    Continuing to accelerate, the aircraft began to plummet toward the ground, eventually reaching supersonic speed at Mach 1.12.
    Meanwhile the instructor had noticed the anomaly, and began calling for his student to "recover, recover." With no response, it was clear that the pilot was in a G-LOC situation. The instructor maneuvered to fly behind the distressed aircraft, but the student's F-16, flying at supersonic speed, pulled away beyond visual range.
    "By the final 'recover' call, I'm basically just hoping that he recovers, because I'd lost sight of him at that point," the instructor said. "I was really hoping I wasn't going to see any sort of impact with the ground."
    Just as the instructor made his third and final "recover" call, the Auto-GCAS in the student's aircraft activated, rolling the aircraft to a safe, upright position and performing an automatic, stabilizing pull-up.
    The pilot regained consciousness and promptly pulled his throttle back to "idle."
    "My memory is that I started the fight and then I could see my instructor and the next thing I remember is just waking up," the pilot recalled. "It feels weird because I think I'm waking up from my bed. In my helmet, I can hear him screaming 'recover, recover' at me and when I open my eyes I just see my legs and the whole cockpit. It doesn't really make sense.
    "I got up over the horizon pretty fast again. It's all thanks to the Auto-GCAS system, which got me out of the roll and started the recovery for me."
    Ultimately, the aircraft recovered at approximately 3,000 feet above the ground. That is higher than the altitude at which Auto-GCAS would normally perform a recovery, but the system, assuming that the throttle would remain in full afterburner and that the pilot would remain unconscious, calculated that additional altitude would be required to complete the recovery.
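    To see why speed drives up the altitude needed for a recovery, a back-of-the-envelope pull-up estimate helps. The sketch below uses a textbook constant-load-factor, circular-arc approximation with made-up numbers; it is not the Auto-GCAS prediction model, which also accounts for thrust, drag, aircraft response, and the actual terrain profile.

        # Illustrative only: textbook constant-load-factor pull-up from a straight dive.
        # Altitude lost ~ r * (1 - cos(dive_angle)), with turn radius r = V^2 / (g * (n - 1)).
        # Hypothetical numbers; this is not the Auto-GCAS model.
        import math

        def pullup_altitude_loss_ft(speed_mps: float, dive_angle_deg: float, load_factor_g: float) -> float:
            g = 9.81                                            # m/s^2
            radius_m = speed_mps ** 2 / (g * (load_factor_g - 1.0))
            loss_m = radius_m * (1.0 - math.cos(math.radians(dive_angle_deg)))
            return loss_m * 3.281                               # meters to feet

        # A 5 g pull-up from a 45-degree dive at two speeds:
        print(round(pullup_altitude_loss_ft(250, 45, 5.0)))     # roughly 1,500 ft at ~250 m/s
        print(round(pullup_altitude_loss_ft(380, 45, 5.0)))     # roughly 3,500 ft near Mach 1.12

    The toy numbers only show that a faster jet consumes far more altitude in the pull-up, which is consistent with the system initiating the recovery well above its usual altitude.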
    "About maybe 30 seconds to a minute after I had gotten everything under control again," remembered the student. "The first thing I thought about was my girlfriend, and then my family, and then my friends back home, and the thought of them basically getting a call (that I had perished)."
    Following the potentially tragic incident, the student followed specific instructions from his instructor, was able to land his aircraft safely, and was promptly attended to by medical personnel.
    The development of Auto-GCAS goes back over 30 years, first flying at Edwards Air Force Base as a collaboration between NASA, the Air Force Research Laboratory (AFRL), and Lockheed Martin. The program was originally included as a test safety system to allow other requested testing to take place. Testers quickly took note of the potential of Auto-GCAS, and agreed that it might have broader ramifications than the primary test systems.
    However, Skoog, who has worked with autonomous systems since the beginning of his career, says the system was met with initial opposition, including from the fighter pilot community.
    "There were some instances where we saw families of pilots who'd been lost in mishaps and we knew that it could be prevented," Skoog said. "It was very challenging. There's a personal burden and a clear moral responsibility to get the message out to the decision makers so that they can properly administrate funds to bring this kind of potential life-saving technology forward."
    Auto-GCAS was eventually incorporated into the Fighter Risk Reduction program and was fielded on the F-16 in September 2014. Since then, the system has been credited with at least four confirmed saves in situations that could have resulted in loss of life.
    "After having gone through so much initial resistance from the pilot community, to now, where just weeks after its implementation there was a complete reversal in pilot opinion," Skoog said. "They are finally seeing what we in the test community saw for a long time."
    For the student, he says, the system made all the difference in his life.
    "This was an isolated incident for me, but, from the bottom of my heart, I just want to say thank you to everyone who has been a part of developing the Auto-GCAS system," he said. "It's everyone, not just engineers, but politicians and people just trying to get the ball rolling on having the Air Force use it. They are the reason that I am able to stand here today and talk about it. I'm able to continue to fly the F-16, and I'm able to go home and see my family again. So thank you, so much."

  2. #2

    A camera that can see unlike any imager before it
    by Staff Writers
    Washington DC (SPX) Sep 21, 2016



    This artist's rendition depicts a single imaging sensor, in this case one that is aboard an unmanned aerial vehicle, simultaneously operating in three potential ReImagine modes (3-D mapping at the lower left, vehicle detection and tracking, and thermal scanning for industrial activity) in different regions of the same field of view. Today a single camera cannot do all of these things.

    Picture a sensor pixel about the size of a red blood cell. Now envision a million of these pixels, a megapixel's worth, in an array that covers a thumbnail. Take one more mental trip: dive down onto the surface of the semiconductor hosting all of these pixels and marvel at each pixel's associated technology, a mesh of more than 1,000 integrated transistors, which provide each and every pixel with a tiny reprogrammable brain of its own. That is the vision for DARPA's new Reconfigurable Imaging (ReImagine) program.

    "What we are aiming for," said Jay Lewis, program manager for ReImagine, "is a single, multi-talented camera sensor that can detect visual scenes as familiar still and video imagers do, but that also can adapt and change their personality and effectively morph into the type of imager that provides the most useful information for a given situation."

    This could mean selecting between different thermal (infrared) emissions or different resolutions or frame rates, or even collecting 3-D LIDAR data for mapping and other jobs that increase situational awareness. The camera ultimately would rely on machine learning to autonomously take notice of what is happening in its field of view and reconfigure the imaging sensor based on the context of the situation.
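
    As a rough illustration of that kind of context-driven reconfiguration, the sketch below picks an imaging mode from a coarse description of the scene. The mode names, scene labels, and decision rules are all invented for this example; DARPA has not published a ReImagine API.

        # Hypothetical context-driven mode selection for a reconfigurable imager.
        # Mode names, scene labels, and rules are invented for illustration only.
        from enum import Enum, auto

        class Mode(Enum):
            VISIBLE_VIDEO = auto()     # conventional still/video imaging
            THERMAL = auto()           # infrared emission imaging
            HIGH_FRAME_RATE = auto()   # slow-motion capture of fast events
            LIDAR_3D = auto()          # 3-D mapping returns

        def choose_mode(scene_label: str, ambient_light: float) -> Mode:
            """Map a coarse scene understanding onto the most informative mode."""
            if scene_label == "fast_moving_object":
                return Mode.HIGH_FRAME_RATE
            if scene_label == "terrain_survey":
                return Mode.LIDAR_3D
            if ambient_light < 0.1:    # too dark for useful visible imaging
                return Mode.THERMAL
            return Mode.VISIBLE_VIDEO

        print(choose_mode("fast_moving_object", ambient_light=0.8))  # Mode.HIGH_FRAME_RATE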

    The future sensor Lewis has in mind would even be able to perform many of these functions simultaneously because different patches of the sensor's carpet of pixels could be reconfigured by way of software to work in different imaging modes. That same reconfigurability should enable the same sensor to toggle between different sensor modes from one lightning-quick frame to the next. No single camera can do that now.
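
    The per-region, per-frame reconfigurability described above can be pictured as a mode map laid over the pixel array, one entry per tile, that software rewrites between frames. The sketch below is purely illustrative; the tile geometry and mode set are assumptions, not published ReImagine details.

        # Hypothetical mode map for a tiled, software-reconfigurable pixel array.
        # Tile geometry and mode codes are invented; this only illustrates assigning
        # different imaging modes to different patches of a single sensor.
        import numpy as np

        TILES_Y, TILES_X = 8, 8          # the array is divided into 8 x 8 tiles
        VISIBLE, THERMAL, LIDAR = 0, 1, 2

        def default_mode_map() -> np.ndarray:
            return np.full((TILES_Y, TILES_X), VISIBLE, dtype=np.uint8)

        def assign_region(mode_map: np.ndarray, mode: int, rows: slice, cols: slice) -> None:
            """Point a rectangular patch of tiles at a different imaging mode."""
            mode_map[rows, cols] = mode

        frame_config = default_mode_map()
        assign_region(frame_config, THERMAL, rows=slice(0, 4), cols=slice(4, 8))  # upper right: thermal
        assign_region(frame_config, LIDAR, rows=slice(4, 8), cols=slice(0, 4))    # lower left: 3-D mapping
        # A fresh map can be loaded for the very next frame, toggling modes frame to frame.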

    A primary driver here, according to Lewis, who works in DARPA's Microsystems Technology Office (MTO), is the shrinking size and cost of militarily important platforms that are finding roles in locations that span from orbit to the seas.

    With multi-functional sensors like the ones that would come out of a successful ReImagine program, these smaller and cheaper platforms would provide a degree of situational awareness that today can only come from suites of single-purpose sensors fitted onto larger airborne, ground, space-based, and naval vehicles and platforms. And with the more extensive situational awareness, Lewis said, would come the most important payoff: more informed decision-making.

    Today, DARPA posted a Special Notice (DARPA-SN-16-68) on FedBizOpps.gov with instructions for those who might want to attend a Proposers Day on September 30 in Arlington, VA, as a step toward possibly participating in the ReImagine program. In the coming days, DARPA expects to also post a Broad Agency Announcement that specifies the new program's technical objectives, milestones, schedule, and deliverables, along with instructions for researchers seeking to submit proposals.

    One key feature of the ReImagine program is that teams will be asked to develop software-configurable applications based on a common digital circuit and software platform. During the four-year program, MIT Lincoln Laboratory, a federally funded research and development center (FFRDC) whose roots date back to the WWII mission to develop radar technology, will be tasked to provide the common reconfigurable digital layer of what will be the system's three-layer sensor hardware.

    The challenge for successful proposers ("performers" in DARPAspeak) will be to design and fabricate various megapixel detector layers and "analog interface" layers, as well as associated software and algorithms for converting a diversity of relevant signals (LIDAR signals for mapping, for example) into digital data.

    That digital data, in turn, should be suitable for processing and for participation in machine learning procedures through which the sensors could become autonomously aware of specific objects, information, happenings, and other features within their field of view. One reason for using a common digital layer, according to Lewis, is the hope that it will enable a community developing "apps" in software to accelerate the innovation process and unlock new applications for software-reconfigurable imagers.

    In follow-on phases of the program, performers will need to demonstrate portability of the developing technology in outdoor testing and, in Lewis's words, "develop learning algorithms that guide the sensor, through real-time adaptation of sensor control parameters, to collecting the data with the highest content of useful information."
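
    One way to picture that closed loop is a controller that scores each captured frame for useful information and nudges a sensor control parameter before the next capture. The sketch below is a toy version with an invented scoring metric and a simulated sensor; it is not a ReImagine algorithm.

        # Toy closed-loop sensor adaptation: score each frame for information content
        # and adjust a control parameter (exposure) for the next capture.
        # The metric and the simulated sensor are stand-ins, not ReImagine algorithms.
        import numpy as np

        def information_score(frame: np.ndarray) -> float:
            """Stand-in metric: entropy of the pixel-intensity histogram."""
            hist, _ = np.histogram(frame, bins=64, range=(0.0, 1.0))
            p = hist[hist > 0] / hist.sum()
            return float(-(p * np.log2(p)).sum())

        def adapt_exposure(capture, exposure: float, steps: int = 10) -> float:
            """Greedy hill-climb on exposure time to maximize the information score."""
            best = information_score(capture(exposure))
            for _ in range(steps):
                for candidate in (exposure * 0.8, exposure * 1.25):
                    score = information_score(capture(candidate))
                    if score > best:
                        best, exposure = score, candidate
            return exposure

        # Usage with a simulated sensor whose output saturates at long exposures:
        rng = np.random.default_rng(0)
        scene = rng.random((64, 64))
        capture = lambda exp: np.clip(scene * exp, 0.0, 1.0)
        print(round(adapt_exposure(capture, exposure=0.2), 3))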

    That adaptation might translate, in response to visual cues, into toggling into a thermal detection mode to characterize a swarm of UAVs, or into hyper-slow-motion (high frame rate) video to help tease out how a mechanical device is working.

    "Even as fast as machine learning and artificial intelligence are moving today, the software still generally does not have control over the sensors that give these tools access to the physical world," Lewis said. "With ReImagine, we would be giving machine-learning and image processing algorithms the ability to change or decide what type of sensor data to collect."

    Importantly, he added, as with eyes and brains, the information would flow both ways: the sensors would inform the algorithms and the algorithms would affect the sensors. Although defense applications are foremost on his mind, Lewis also envisions commercial spinoffs. Smart phones of the future could have camera sensors that do far more than merely take pictures and video footage, their functions limited only by the imaginations of a new generation of app developers, he suggested.
