
Welcome to D5AI

Deeper Intelligence

Our Mission:
Building Tools for Responsible AI

Differences in Our Technology


D5AI has a world-class portfolio of inventions that can be used together to make substantial advances in the state of the art in sustainability, explainability, controllability, security, and safety. The difference starts with the learning coach, which combines a human team and an AI training management system. Together with targeted incremental growth and Markov decision process reinforcement learning, it achieves dual results: it enables human common sense to be applied during the training of an AI system, and it semi-automates and greatly magnifies the scope of human knowledge engineering.


Hybrid networks, in which nodes are replaced by more complex elements such as multi-output units, computation cells, and regularization links, can efficiently improve the performance of a network design while making it easier to understand and control. These tools support other training methods in addition to gradient descent. They also introduce novel elements such as explainable nodes and conditional probability models. Use of these elements and training techniques can greatly reduce the amount of computation required for training while simultaneously improving robustness against noise and adversarial attacks, thereby improving sustainability, explainability, and security.


Learning coach


In our systems, training may be actively guided and controlled by a specialized AI system called a “Learning Coach.” The learning coach may be a cooperative effort of a human team and an AI system. The AI system in the learning coach gives the human team much greater control over the training process than would be possible manually. For example, the learning coach may tune a custom hyperparameter for each of millions of relationship regularization links. The learning coach may also control the addition of nodes designed to be explainable. In a hybrid network, the learning coach may also control computation cells and their data connections and interactions with nodes.


The learning coach facilitates other technologies such as incremental growth, specialized elements in hybrid networks, the analysis of objectives and implicit errors in individual nodes, the training of stochastic models, and active defense against adversarial attacks.


The human-AI cooperation of the learning coach enables much greater human control of the training process. The learning coach supports all the other techniques described here.


Targeted incremental growth


When the performance of a neural network falls below expectations, the standard practice is to define a substantially larger network and try again. In training our systems, incremental changes to the network architecture may be made at any time. These changes may be targeted at improving the performance of specific nodes and/or at specific data items. Incremental growth may be implemented in a way that guarantees no degradation in performance. Targeted incremental growth improves the efficiency of the training process, contributing to sustainability, and enables the targeted introduction of explainable nodes and conditional probability models to improve explainability.
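One way to guarantee no degradation when growing a network is to initialize the new element so that it contributes nothing at first. The sketch below is an illustrative construction, not D5AI's actual mechanism: it adds a hidden unit with random incoming weights and zero outgoing weights, so the network computes exactly the same function before and after the growth step.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2):
    """Two-layer network: input -> tanh hidden layer -> linear output."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

# A small stand-in network: 4 inputs, 3 hidden units, 2 outputs.
W1 = rng.normal(size=(4, 3)); b1 = rng.normal(size=3)
W2 = rng.normal(size=(3, 2)); b2 = rng.normal(size=2)

def grow_hidden_unit(W1, b1, W2):
    """Add one hidden unit: random incoming weights, ZERO outgoing
    weights, so the grown network's function is unchanged."""
    W1g = np.hstack([W1, rng.normal(size=(W1.shape[0], 1))])
    b1g = np.append(b1, rng.normal())
    W2g = np.vstack([W2, np.zeros((1, W2.shape[1]))])
    return W1g, b1g, W2g

x = rng.normal(size=(5, 4))
before = forward(x, W1, b1, W2, b2)
W1g, b1g, W2g = grow_hidden_unit(W1, b1, W2)
after = forward(x, W1g, b1g, W2g, b2)
assert np.allclose(before, after)   # growth did not change the output
```

The freshly grown unit then receives gradient signal (or other training signal) on subsequent steps, so performance can only improve from its current level.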


Hybrid networks with specialized elements


Conventional neural networks are a special case of D5AI’s more general hybrid networks. An element of a hybrid network may be a more complex unit with multiple output values. For example, a unit may not only have a separate output value for each of two alternatives but also an output value for neither alternative being true and an output value for both alternatives being true.
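A minimal sketch of such a multi-output unit, assuming for illustration that it is realized as a softmax over the four joint states of two alternatives A and B (this realization is an assumption, not a detail from the source):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Four logits, one per joint state of alternatives A and B:
# index 0: neither, 1: A only, 2: B only, 3: both.
logits = np.array([0.2, 1.5, -0.3, 0.7])
p = softmax(logits)
p_neither, p_a_only, p_b_only, p_both = p

# Ordinary marginal outputs are recoverable, but the unit also
# represents "neither" and "both" explicitly.
p_a = p_a_only + p_both
p_b = p_b_only + p_both
assert abs(p.sum() - 1.0) < 1e-9
```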


A unit in a hybrid network may also have elements representing computable attributes that are easy to compute directly but are not represented by node activations. The values of the attributes of a unit may be passed to other units through data links rather than network connections. A hybrid unit may contain one or more such computation cells, which may have associated memory and specified computations. A hybrid network may also have non-parametric or parametric probability models, as well as controlled data switches. In addition to enabling more efficient computation, data switches may be used as an active defense against adversarial attacks.


Hybrid networks, with more complex units, enable better knowledge representation, more general training methods, and novel elements such as explainable nodes and probability models. Hybrid networks contribute to sustainability, explainability, and security.


Explaining individual nodes


Typically, most of the inner nodes in a large trained neural network multi-task, so it is difficult or impossible to explain the operation of a node in simple mathematical expressions or in human language. Our approach avoids this difficulty. Rather than attempting to explain an existing node in a fixed network, we change the question: to any node in a neural network, our system may add companion nodes, each designed to be easy to explain. A single base node may have multiple explainable companion nodes, so most of the nodes in the expanded network will be explainable. The explainable companion nodes may also be designed to require very little computation to train. Explainable nodes contribute to explainability and sustainability.
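As an illustration of the idea (the specific form of companion node here is an assumption, not D5AI's design), a companion attached to a base node could be a single logistic unit over the same inputs, trained alone against a human-specified concept while the base network stays frozen:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the (frozen) inputs feeding some base node.
base_inputs = rng.normal(size=(200, 3))
# Human-labeled concept, e.g. "input feature 0 is positive".
concept = (base_inputs[:, 0] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Companion node: one logistic unit over the SAME inputs as the base
# node, trained by itself on cross-entropy -- the base network is never
# touched, and the training cost is tiny.
w = np.zeros(3); b = 0.0
for _ in range(500):
    p = sigmoid(base_inputs @ w + b)
    grad_w = base_inputs.T @ (p - concept) / len(concept)
    grad_b = (p - concept).mean()
    w -= 0.5 * grad_w; b -= 0.5 * grad_b

acc = ((sigmoid(base_inputs @ w + b) > 0.5) == concept).mean()
assert acc > 0.9
```

The companion is explainable by inspection: its weight vector shows directly which input drives it (here, feature 0 dominates), while the base node is left free to multi-task.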


Conditional probability models


A D5AI hybrid network may also comprise explicit probability models. For language models or other networks dealing with sequences, the network may comprise models of conditional probability, which may be used as a contrast and complement to the correlation computation of the attention mechanism. Like the explainable nodes, the probability models may be designed to be trained with very little computation. Conditional probability models contribute to sustainability and explainability.
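For example, the simplest conditional probability model over token sequences is a bigram model, whose maximum-likelihood training reduces to counting (an illustrative baseline; the source does not specify the model family):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Maximum-likelihood bigram model: P(next | current) by counting."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        for cur, nxt in zip(sentence, sentence[1:]):
            counts[cur][nxt] += 1
    return {cur: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
            for cur, ctr in counts.items()}

corpus = [["the", "cat", "sat"],
          ["the", "dog", "sat"],
          ["the", "cat", "ran"]]
model = train_bigram(corpus)

# "the" is followed by "cat" twice and "dog" once.
assert abs(model["the"]["cat"] - 2 / 3) < 1e-12
assert abs(sum(model["cat"].values()) - 1.0) < 1e-12
```

A single pass over the data suffices, which is the sense in which such models can be trained with very little computation compared with gradient-based training of an attention mechanism.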


Regularization Links


Regularization links or, more formally, data-specific relationship regularization links, are a unique D5AI innovation that supports many different tools. A regularization link is a unidirectional link between any pair of nodes; it is not a connection in the neural network. A link may go from a node in a higher layer of a network to a node in a lower layer. A symmetric bidirectional link may be created by a unidirectional link in each direction. A link may run from a source node in one network to a receiving node in a second network, or from an external source of knowledge, such as a reference book or a knowledge graph, to a receiving node in a neural network.


Each link imposes a regularization on the target node of the link to enforce a specified relation, such as is-equal-to, is-not-equal-to, is-greater-than, or is-less-than, between the activation values of the source node and the target node. The regularization may be restricted to a specified subset of the data. Regularization links give the learning coach, including the human team, much greater and more flexible control over the training process, including adaptive training and training by human feedback. Regularization links contribute to sustainability, controllability, explainability, and safety.
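A minimal sketch of such a link as a penalty term, assuming for illustration that the relation is enforced as a soft constraint added to the training loss and that the data restriction is a per-item mask (both are assumptions, not details from the source):

```python
import numpy as np

def link_penalty(a_src, a_tgt, relation, lam, mask):
    """Data-specific relationship regularization: penalize violations of
    the stated relation between source and target node activations,
    only on the data items selected by `mask`."""
    if relation == "is-equal-to":
        viol = (a_src - a_tgt) ** 2
    elif relation == "is-less-than":        # enforce a_src < a_tgt
        viol = np.maximum(0.0, a_src - a_tgt) ** 2
    elif relation == "is-greater-than":     # enforce a_src > a_tgt
        viol = np.maximum(0.0, a_tgt - a_src) ** 2
    else:
        raise ValueError(f"unknown relation: {relation}")
    return lam * (viol * mask).mean()

# Activations of a source and a target node on a batch of three items.
a_src = np.array([0.9, 0.2, 0.8])
a_tgt = np.array([0.1, 0.5, 0.7])
mask  = np.array([1.0, 1.0, 0.0])   # regularize only the first two items

p = link_penalty(a_src, a_tgt, "is-less-than", lam=0.1, mask=mask)
# Among the masked items, only item 0 violates a_src < a_tgt.
assert abs(p - 0.1 * (0.8 ** 2) / 3) < 1e-9
```

In this sketch the weight `lam` plays the role of the per-link hyperparameter that, as described above, the learning coach may tune individually for each of millions of links.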


Training methods other than gradient descent


Our systems may use other training methods instead of or in addition to gradient descent. Explainable nodes may be designed to be trained directly from a node-specific objective rather than from back-propagated derivatives. Probability models may be trained directly by maximum likelihood. Piecewise-constant activation functions and their subnetworks may be trained by other methods, such as linear and non-linear programming, artificial local objectives, link regularization, and/or back propagation of data examples based on errors on a local objective. The addition of training methods other than gradient descent contributes to security and safety.
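As a toy illustration of training without gradients (the method shown, exhaustive threshold search, is a stand-in for illustration and not necessarily one of the methods listed above), a step-activation node whose gradient is zero almost everywhere can still be fit by direct search against a local objective:

```python
import numpy as np

rng = np.random.default_rng(2)

# A step-activation node outputs 1 when its input >= threshold.  Its
# gradient is zero almost everywhere, so gradient descent cannot train
# it; a direct search over candidate thresholds can.
x = rng.normal(size=200)
target = (x >= 0.5).astype(float)    # the node's local objective

def local_error(theta):
    return ((x >= theta).astype(float) != target).mean()

# Candidate thresholds: midpoints between consecutive sorted inputs.
xs = np.sort(x)
candidates = (xs[:-1] + xs[1:]) / 2
best = min(candidates, key=local_error)
assert local_error(best) == 0.0      # a perfect threshold is found
```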


Markov decision process reinforcement learning


The theory of Markov decision processes is the formal framework for reinforcement learning as used in robotics and in systems such as AlphaGo. In this memo, the phrase “Markov decision process reinforcement learning” is used rather than simply “reinforcement learning” to avoid confusion with “reinforcement learning from human feedback,” in which the reinforcement is more like that in operant conditioning in psychology and animal behavior training. The operant conditioning of learning from human feedback may be facilitated by explainable nodes and data-dependent relationship regularization links, but that is different from Markov decision process reinforcement learning.


Markov decision process reinforcement learning may be used by the learning coach to explore the effect of multiple modifications of the network being trained. The objective for the exploration may be specified by the human team or by the AI system in the learning coach: it may be a reduction in the error rate, better explainability, a reduction in the amount of computation, or any combination of these and other objectives. Markov decision process reinforcement learning contributes to sustainability and security.
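A toy sketch of this kind of exploration, using tabular Q-learning over a deliberately simplified space where the "state" is just the network's size and the reward combines a simulated error term with a compute cost (every detail here is an illustrative assumption, not D5AI's system):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy MDP: state = size of the network being modified (1..10);
# actions = single-step modifications, grow (+1) or shrink (-1).
SIZES = range(1, 11)
ACTIONS = (1, -1)

def reward(size):
    # Combined objective: simulated error (1/size) plus compute cost.
    return -(1.0 / size + 0.04 * size)

Q = {(s, a): 0.0 for s in SIZES for a in ACTIONS}
alpha, gamma, eps = 0.2, 0.9, 0.2      # learning rate, discount, exploration

for episode in range(300):
    s = int(rng.integers(1, 11))       # random starting size
    for step in range(20):
        if rng.random() < eps:
            a = int(rng.choice(ACTIONS))                    # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])   # exploit
        s2 = min(10, max(1, s + a))
        target = reward(s2) + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# The learned policy grows networks that are too small and shrinks
# networks that are too large, settling near the best trade-off.
assert Q[(2, 1)] > Q[(2, -1)]     # at size 2, growing is preferred
assert Q[(10, -1)] > Q[(10, 1)]   # at size 10, shrinking is preferred
```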


©2023 by D5AI.
