RenderMatrix, Inc.

Corporate Information

Founded in 2001

RenderMatrix, Inc. is an advanced research and development engineering company. Our areas of expertise include applying artificial intelligence across the global defense industry, the private sector, entertainment technology, and advanced research.

Philosophy

Never Stop

This is our mantra. We strive to engineer disruptive technologies and services that change the way industries work. To do this, we Never Stop solving problems, we Never Stop engineering solutions, and we Never Stop thinking creatively.

Capabilities

RenderMatrix, Inc. employs a team of advanced research engineers and software engineers in the fields of sensor system development, instrumentation/automation, autonomous vehicle engineering, and artificial intelligence research. Our engineers serve clients across a variety of industries, including the global defense industry, the private sector, entertainment technology, and advanced research. Our areas of expertise include the following:

Advanced software engineering

Artificial intelligence research

Instrumentation, controls, and automation

Development of autonomous systems

Sensor system engineering

Entertainment technology (video game development)

Past clients include:

JIEDDO

Mav6, LLC.

U.S. Army Research Laboratory

U.S. Army Night Vision and Electronic Sensors Directorate

Lurgi GmbH

Air Liquide

UP Professional Solutions

Valero

Collaborations

RenderMatrix, Inc. collaborates with several universities, including the University of Memphis and Indiana University-Purdue University Indianapolis. These collaborations have led to the completion of multiple business engagements, which have provided funding for internal university laboratories and sponsored researchers and students.

Executive Management

Dr. Joseph Qualls - CEO and President

Dr. Qualls is the founding president and Chief Executive Officer of RenderMatrix, Inc. For the past 10 years, Dr. Qualls has published numerous papers and book chapters detailing the capabilities of artificial intelligence in solving problems for the global defense industry, the private sector, and entertainment technology. Dr. Qualls earned his Bachelor of Science in electrical engineering from Christian Brothers University in 2001, his Master of Science in electrical and computer engineering from the University of Memphis in 2008, and his Doctor of Philosophy in computer and electrical engineering from the University of Memphis in 2011.

Research and Engineering

  • Global Defense Solutions
  • Advanced Research

Global Defense Solutions

Our team of engineers and researchers focuses on developing products and solutions that enhance the situational awareness of the warfighter and increase the effectiveness of commanders making decisions for large numbers of warfighters and military equipment. Our products leverage artificial intelligence as the core system component for autonomous decision-making. RenderMatrix, Inc. has researched and engineered products for JIEDDO, Mav6, LLC., and many others. Some of our products include the following:

Common IED Exploitation Target Set (CIEDETS)

Although extensive information about improvised explosive devices (IEDs) and tactics has been acquired, the knowledge and data are unstructured and are not based on a formal semantic model. The knowledge and data consist of various independent sources of information, including device properties and emplacement scenarios, organized under widely disparate standards and maintained by a diverse set of user groups. The lack of a formal semantic model limits the sharing of information and impedes the consistent testing of new sensor systems to counter IED threats.

RenderMatrix, Inc., in collaboration with the University of Memphis, has developed the Common IED Exploitation Target Set (CIEDETS) ontology for Mav6, LLC. The CIEDETS ontology provides a comprehensive semantic data model for capturing knowledge about sensors, platforms, missions, environments, and other aspects of systems under test. The ontology also includes representative IEDs, modeled as explosives, camouflage, concealment objects, and other background objects, which together comprise an overall threat scene. The ontology is represented using the Web Ontology Language (OWL) and queried with the SPARQL Protocol and RDF Query Language, which ensures portability of the acquired knowledge base across applications.

The resulting knowledge base is a component of the CIEDETS application, which supports the end-user sensor test and evaluation community. CIEDETS associates a system under test with a subset of cataloged threats based on the probability that the system will detect the threat. The associations between systems under test, threats, and the detection probabilities are established using a hybrid reasoning strategy that applies a combination of heuristics and simplified modeling techniques. Besides supporting the CIEDETS application, which is focused on efficient and consistent system testing, the ontology can be leveraged in a myriad of other applications, including serving as a knowledge source for mission-planning tools.
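As a rough illustration of how such an OWL/SPARQL knowledge base can be queried, the sketch below builds a tiny RDF graph with rdflib and asks which cataloged threats a system under test is likely to detect. The class and property names (ex:SystemUnderTest, ex:DetectionAssociation, ex:detectionProbability, and so on), the example individuals, and the probabilities are invented for this example and are not the actual CIEDETS vocabulary or data.

```python
# Hypothetical sketch of the kind of query a CIEDETS-style knowledge base
# might support: associate a system under test with cataloged threats whose
# estimated detection probability exceeds a threshold. All names and numbers
# below are invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/ciedets#")

g = Graph()
g.bind("ex", EX)

# One system under test and two cataloged threats (illustrative data only).
g.add((EX.LWIR_Sensor_A, RDF.type, EX.SystemUnderTest))
for threat, prob in [("BuriedDevice_01", 0.82), ("ConcealedDevice_02", 0.35)]:
    t = EX[threat]
    g.add((t, RDF.type, EX.Threat))
    assoc = EX[f"assoc_{threat}"]
    g.add((assoc, RDF.type, EX.DetectionAssociation))
    g.add((assoc, EX.systemUnderTest, EX.LWIR_Sensor_A))
    g.add((assoc, EX.threat, t))
    g.add((assoc, EX.detectionProbability, Literal(prob)))

# SPARQL query: which threats is the system likely to detect (p >= 0.5)?
query = """
PREFIX ex: <http://example.org/ciedets#>
SELECT ?threat ?p WHERE {
    ?a a ex:DetectionAssociation ;
       ex:systemUnderTest ex:LWIR_Sensor_A ;
       ex:threat ?threat ;
       ex:detectionProbability ?p .
    FILTER (?p >= 0.5)
}
"""
for row in g.query(query):
    print(f"{row.threat} detectable with p = {row.p}")
```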

Autonomous Decision Enhancing Situational Awareness (ADESA)

Network-centric operations have seen the rapid development and deployment of ubiquitous assets, such as unmanned vehicles, sensor systems, and algorithms, to enhance the situational awareness and effectiveness of the warfighter. Controlling and tasking these assets has remained a challenge due to the sheer number of assets, the operational man-hours required to control them, the analysis of their data, and chain-of-command decision-making time. Fast, intuitive operational command and autonomous control of the assets are paramount to meeting the changing demands of missions within multiple theaters.

RenderMatrix, Inc. is performing research and development on a project called ADESA (Autonomous Decision Enhancing Situational Awareness). The main principle behind the research is the leveraging of artificial intelligence, including ontologies and rules. This research and development has led to an autonomous system that allows multiple users to create and execute high-level missions while the artificial intelligence of ADESA handles planning, re-planning, control, and information display for the assets. Depending on the type of assets assigned to a high-level mission, ADESA displays real-time video of the operation, text-based information reports, 3D visualizations of operations, notifications of interest, and mission-completion status.

ADESA allows users to interact with its intuitive interface using a variety of hardware platforms, including mobile touchscreen devices. Warfighters are able to control assets and view live data from the battlefield using tablet computers and other mobile devices, and members of the intelligence community are able to task assets and monitor live data from across the globe. ADESA's use of advanced artificial intelligence removes the burden of low-level mission planning from its users, freeing them to concentrate on more important activities.
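The sketch below is a deliberately minimal, rule-based illustration of the tasking idea described above: a high-level mission is expressed as subtasks with required capabilities, and each subtask is matched to the first free asset that covers it. It is not ADESA's implementation; the asset names, capabilities, and mission are invented for the example.

```python
# Minimal sketch (not ADESA's actual implementation) of rule-based tasking:
# each subtask is assigned to the first available asset whose capabilities
# cover its requirements. Names and capabilities are illustrative.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    capabilities: set
    busy: bool = False

def plan(mission_subtasks, assets):
    """Assign each subtask to a capable, free asset; return the tasking plan."""
    assignments = {}
    for subtask, needed in mission_subtasks.items():
        for asset in assets:
            if not asset.busy and needed <= asset.capabilities:
                asset.busy = True
                assignments[subtask] = asset.name
                break
        else:
            assignments[subtask] = None  # would trigger re-planning in a real system
    return assignments

assets = [
    Asset("UAV-1", {"eo_video", "loiter"}),
    Asset("UGS-3", {"seismic", "report"}),
]
mission = {
    "overwatch_route_blue": {"eo_video", "loiter"},
    "monitor_crossing_site": {"seismic"},
}
print(plan(mission, assets))
```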

Lethal Autonomous Warfighter System (LAW)

Warfighters must have access to lethal autonomous drones that are portable and reusable in order to increase capability, efficiency, and effectiveness. Currently, warfighters have access to portable drones that allow for direct-targeted operations against a threat via remote-control positioning and detonation over the threat. The detonation means the drone is single-use, and current systems allow the warfighter to carry only one drone on most missions.

RenderMatrix, Inc. is developing a new lethal autonomous warfighter (LAW) drone system that is reusable and portable. The LAW system is a set of small drones capable of tracking a threat and firing small-arms rounds at it. The drones coordinate their flight using flocking algorithms and employ target recognition and stabilization algorithms for rapid firing of small arms. Other capabilities will include human-in-the-loop flight control and target identification. The LAW system enables significant increases in warfighter mission effectiveness, monetary savings through reuse, and, most importantly, better protection for the warfighter during operations, thus saving lives.
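As a rough illustration of the flocking idea mentioned above, the sketch below implements a basic boids-style update (cohesion, separation, alignment) for a handful of simulated drones. The gains, radii, and time step are arbitrary values chosen for the example, and the sketch deliberately omits target tracking, stabilization, weapons logic, and human-in-the-loop control.

```python
# Illustrative boids-style flocking update of the general kind used for
# coordinated flight; all gains and limits are made-up example values.
import numpy as np

def flock_step(pos, vel, dt=0.1, r_sep=2.0, k_coh=0.05, k_sep=0.2, k_ali=0.1):
    """One update of N drones; pos and vel are (N, 3) arrays."""
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        others = np.arange(n) != i
        # Cohesion: steer toward the centroid of the other drones.
        coh = pos[others].mean(axis=0) - pos[i]
        # Separation: push away from neighbors closer than r_sep.
        diff = pos[i] - pos[others]
        dist = np.linalg.norm(diff, axis=1, keepdims=True)
        close = (dist < r_sep) & (dist > 0)
        sep = (diff / np.maximum(dist, 1e-6) * close).sum(axis=0)
        # Alignment: match the average velocity of the flock.
        ali = vel[others].mean(axis=0) - vel[i]
        new_vel[i] += dt * (k_coh * coh + k_sep * sep + k_ali * ali)
    return pos + dt * new_vel, new_vel

rng = np.random.default_rng(0)
pos = rng.uniform(-5, 5, (6, 3))
vel = rng.uniform(-1, 1, (6, 3))
for _ in range(100):
    pos, vel = flock_step(pos, vel)
print("flock spread after 100 steps:", np.round(pos.std(axis=0), 2))
```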

Sensor Atom Configuration Software (SACS)

The bulk of commercial sensor system design is devoted to improving image quality through models and metrics such as resolution, sensitivity, and color reproducibility. For networks of multimodal sensors, such models and metrics do not currently exist. Models and metrics used to design traditional commercial and military sensor systems are not directly applicable. The traditional models and metrics cannot address or even describe the possible synergy between geographically separated sensing elements in a network. The lack of metrics and design methodologies for sensor networks raises the cost of development and produces sub-optimal performance of sensor networks, thus lowering effectiveness.

To address the lack of models and metrics for developing sensor networks, RenderMatrix, Inc. has been researching and engineering a new software system called Sensor Atom Configuration Software (SACS). In short, SACS allows an engineer to design a sensor network and test it in specific environments against specific targets, determining whether the network is capable of detecting and classifying the specified target. SACS leverages well-defined semantics, sensor atom concepts, and design metrics to help determine the performance of a sensor network within a given environment against a specific target.
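The toy calculation below illustrates the kind of network-level design metric such a tool might report: the probability that at least one sensor in a candidate layout detects a given target, with each sensor's single-look detection probability degraded by range and by a crude environment attenuation factor. The formula, parameters, and independence assumption are illustrative and are not SACS's actual models.

```python
# Toy network-level detection metric: P(at least one sensor detects target),
# assuming independent looks and a simple range/attenuation model. These
# formulas and parameters are illustrative assumptions only.
import math

def single_sensor_pd(distance_m, max_range_m, p_at_zero=0.95, env_attenuation=1.0):
    """Detection probability that decays with range; zero beyond max range."""
    if distance_m >= max_range_m:
        return 0.0
    return p_at_zero * (1.0 - distance_m / max_range_m) * env_attenuation

def network_pd(sensor_positions, target, max_range_m=100.0, env_attenuation=0.8):
    """Combine single-sensor probabilities into a network detection probability."""
    p_miss_all = 1.0
    for x, y in sensor_positions:
        d = math.hypot(target[0] - x, target[1] - y)
        p_miss_all *= 1.0 - single_sensor_pd(d, max_range_m,
                                             env_attenuation=env_attenuation)
    return 1.0 - p_miss_all

sensors = [(0, 0), (60, 0), (30, 50)]   # candidate network layout (meters)
print(f"network Pd = {network_pd(sensors, target=(40, 20)):.2f}")
```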

Advanced Research

RenderMatrix, Inc., in collaboration with university partners such as the University of Memphis and Indiana University-Purdue University Indianapolis, employs advanced research engineers, professors, and graduate students to theorize about and creatively solve the problems of today, and to anticipate future problems along with their solutions.

Autonomy in sensor networks

Sensor networks have increased in complexity over the last decade due to application needs and enabling technology. For example, the need to monitor vast geographic regions comprising warzones, disaster areas, or other areas of interest has increased the number of sensors and datasets that must be effectively managed. In addition, an exponential increase in new sensor types is placing increased demands on middleware to support communications, control, and interoperability. These networks typically consist of sensors, sensor platforms, fusion algorithms, and the underlying network-centric computing infrastructure.

Dynamically discovering, matching, and integrating sensors and compatible algorithms to form a synthesis of systems that are capable of satisfying subtasks of high-level missions poses a significant challenge for network-centric architectures. Compounding the challenge is the lack of knowledge and data models used to describe the relationships among sensors, algorithms, and missions. Most algorithms are designed for specific sensor systems in anticipation of performing a specific task. Designing and deploying tightly integrated systems limits their potential reuse for new, unanticipated tasks without re-engineering the systems.

RenderMatrix, Inc. has designed a novel ontological problem-solving framework that leverages knowledge models describing sensors, algorithms, and high-level missions to facilitate automated inference when assigning systems to the subtasks that may satisfy a given mission specification. To demonstrate the efficacy of the ontological problem-solving architecture, a family of persistent surveillance sensor systems and algorithms was instantiated in a prototype environment and used to assign systems to subtasks of high-level missions. The prototype system served as the basis for the ADESA project being developed at RenderMatrix, Inc. for defense and private-sector clients.
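A compressed sketch of the matching idea follows: a subtask is satisfiable when some sensor's output modality feeds an algorithm that produces the capability the subtask requires. Real inference over an OWL knowledge model is replaced here by plain Python dictionaries, and every sensor, algorithm, and subtask name is invented for the example.

```python
# Illustrative sensor/algorithm/subtask matching, standing in for inference
# over a real knowledge model. All names and relationships are invented.
sensors = {
    "near_ir_profiler": {"produces": "silhouette"},
    "acoustic_array":   {"produces": "audio"},
}
algorithms = {
    "silhouette_classifier": {"consumes": "silhouette",
                              "capability": "classify_human_vehicle"},
    "gunshot_detector":      {"consumes": "audio",
                              "capability": "detect_gunshot"},
}
subtasks = {"perimeter_watch": "classify_human_vehicle"}

def assign(subtasks, sensors, algorithms):
    """Pair each subtask with a (sensor, algorithm) chain that satisfies it."""
    assignments = {}
    for subtask, needed in subtasks.items():
        for a_name, alg in algorithms.items():
            if alg["capability"] != needed:
                continue
            for s_name, sensor in sensors.items():
                if sensor["produces"] == alg["consumes"]:
                    assignments[subtask] = (s_name, a_name)
    return assignments

print(assign(subtasks, sensors, algorithms))
```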

Profiling Sensors

The University of Memphis and RenderMatrix, Inc. are developing a family of profiling sensors in collaboration with the U.S. Army Research Laboratory and the U.S. Army Night Vision and Electronic Sensors Directorate. Each of these sensors shares a common design theme of using a sparse detector array, as compared to traditional imagers, which have a dense focal-plane array. The approaches range from an active, near-infrared (near-IR) profiling sensor to a passive pyroelectric device. Silhouettes generated by the near-infrared profiling sensor are classified as a human, vehicle, or animal using a classification algorithm. Several algorithms have been applied and analyzed for accuracy, including naive Bayesian and soft linear vector quantization classifiers, among others.
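The sketch below shows a naive Bayesian classifier of the general kind mentioned above, but applied to synthetic silhouette-style features (height, width, aspect ratio) generated for the example; the real profiling-sensor data and feature-extraction pipeline are not reproduced here.

```python
# Illustrative only: classify synthetic "silhouette" feature vectors into
# human / vehicle / animal with a Gaussian Naive Bayes model. The features
# and class clusters are fabricated for the example.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def synth(label, height, width, n=100):
    """Fake (height, width, aspect ratio) features clustered per class."""
    h = rng.normal(height, 0.15, n)
    w = rng.normal(width, 0.15, n)
    return np.column_stack([h, w, h / w]), np.full(n, label)

X_h, y_h = synth("human", 1.7, 0.5)
X_v, y_v = synth("vehicle", 1.8, 4.5)
X_a, y_a = synth("animal", 0.9, 1.2)
X = np.vstack([X_h, X_v, X_a])
y = np.concatenate([y_h, y_v, y_a])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```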

DefibViz

The software tool DefibViz (defibrillator visualization) has several research goals aimed at improving defibrillator design. These goals include facilitating the understanding of how electrode properties and placement affect the voltage-gradient distribution throughout the torso and determining optimal electrode placement to maximize defibrillation efficacy. Both geometric rendering and interactive volume-data exploration techniques are exploited. DefibViz includes three-dimensional (3-D) slice-plane widgets so that the distribution of the voltage gradient induced by a simulated shock can be visually inspected throughout the heart and torso.
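As a much-simplified illustration of the slice-plane idea, the sketch below samples a synthetic 3-D "voltage gradient" volume and displays a single axial slice with matplotlib. The field is a made-up analytic function of distance from two assumed electrode positions, not a torso or heart simulation, and the fixed slice index stands in for an interactive widget.

```python
# Sample a synthetic 3-D "gradient" volume and show one axial slice.
# The field below is an invented analytic function, not a defibrillation model.
import numpy as np
import matplotlib.pyplot as plt

x, y, z = np.meshgrid(np.linspace(-1, 1, 64),
                      np.linspace(-1, 1, 64),
                      np.linspace(-1, 1, 64), indexing="ij")
e1, e2 = (-0.5, 0.0, 0.3), (0.5, 0.1, -0.2)   # assumed "electrode" positions
d1 = np.sqrt((x - e1[0])**2 + (y - e1[1])**2 + (z - e1[2])**2)
d2 = np.sqrt((x - e2[0])**2 + (y - e2[1])**2 + (z - e2[2])**2)
gradient = 1.0 / (0.1 + d1) + 1.0 / (0.1 + d2)

# An interactive slice-plane widget would move this index; here it is fixed.
k = gradient.shape[2] // 2
plt.imshow(gradient[:, :, k].T, origin="lower", extent=[-1, 1, -1, 1])
plt.colorbar(label="simulated gradient (arbitrary units)")
plt.title("Axial slice through a synthetic gradient volume")
plt.show()
```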

Neural Networks for autonomous cooperation in game technology

Most agents in computer games are designed using classical symbolic artificial intelligence (AI) techniques. These techniques include production rules for very large branching and conditional statements, as well as search techniques, including branch-and-bound, heuristic search, A*, and planning techniques such as STRIPS (Stanford Research Institute Problem Solver). Situational case-based reasoning, finite-state machines, classical expert systems, Bayesian networks, and other forms of logic, including predicate calculus and its derivatives such as description logics, also form the foundation of many game agents that leverage AI techniques. These game agents are typically created with a priori knowledge bases of game states and state transitions, including mappings of the world environment and the game agent's reactions to the environment and vice versa.

Attempting to determine a priori every game state that an agent will face is a daunting task. Even for relatively simple games, over 20,000 possible states exist, which limits the applicability of some techniques. Classical AI techniques can become very complex to create, maintain, and scale as the possible game states grow more complex. In many cases, since it is not feasible to plan for every event in a game, the agents have a very limited perception of the game. Neural networks have the ability to overcome some of the shortcomings associated with applying many of the classical AI techniques to computer game agent design. Neural networks have many advantages, including being self-adaptive: they adapt well to computer game environments that change in real time.

RenderMatrix, Inc. researched and developed custom neural networks for the computer game Defend and Gather. Defend and Gather pits agents built with classical symbolic AI techniques against neural-based game agents in a contest to determine which type of agent will win the game. The goals of the agents are in direct contradiction to one another to ensure confrontation: the classical symbolic AI agents hunt and destroy the neural-based agents and defend the resource points, while one neural-based agent searches for the resource points while avoiding the classical symbolic AI agents and the other hunts the classical symbolic AI agents. The two neural networks were trained separately from each other, with no a priori knowledge, to see whether they would learn to cooperate during game play to win the game. In short, the neural-based agents won over 80% of the time across increasingly difficult levels, compared to a 40% win rate for human players playing as the same agents.
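The sketch below shows, in the most minimal form, the kind of neural game agent described above: a small feed-forward network maps a few game-state features to a discrete action. The architecture, input features, action names, and random weights are illustrative assumptions; the actual Defend and Gather networks and their training procedure are not reproduced here.

```python
# Minimal feed-forward game agent: maps a handful of game-state features to
# an action. Weights are random and untrained; everything here is illustrative.
import numpy as np

rng = np.random.default_rng(7)

class NeuralAgent:
    def __init__(self, n_inputs=4, n_hidden=8,
                 actions=("gather", "evade", "hunt")):
        self.actions = actions
        self.w1 = rng.normal(0, 0.5, (n_inputs, n_hidden))
        self.w2 = rng.normal(0, 0.5, (n_hidden, len(actions)))

    def act(self, state):
        """state: e.g. [dist_to_resource, dist_to_enemy, own_health, enemy_visible]"""
        h = np.tanh(state @ self.w1)       # hidden layer activations
        scores = h @ self.w2               # one score per action
        return self.actions[int(np.argmax(scores))]

agent = NeuralAgent()
state = np.array([0.2, 0.9, 0.8, 0.0])     # near a resource, enemy far away
print(agent.act(state))
```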

Neural networks have proven themselves viable for agent design, but there remain many unexplored avenues in computer games that could benefit from them. The area of content generation has only briefly been discussed in recent research; the potential is that neural networks could generate entire worlds, or even entire computer games, based on human players' preferences. Neural networks have great potential for designing computer games and technology that entertain players through newly generated content and increasing challenge as the players learn the game.