This section describes the research and application of AI technologies across academic, industry, and government organizations, with uses ranging from Machine Learning to Artificial Neural Networks. More details on these and other applications can be found on the Links Page.
Automated Machine Learning (AutoML) is a branch of machine learning that automates the process of applying machine learning algorithms, with the goal of reducing the amount of human effort required. This high degree of automation allows non-experts to make use of machine learning models and techniques without first becoming experts in the field.
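As a minimal sketch, the snippet below shows the kind of search AutoML automates: trying candidate models and hyperparameters and keeping the best performer. It assumes scikit-learn is installed; the dataset and candidate models are illustrative choices, not drawn from any particular AutoML system.

```python
# Minimal sketch of automated model/hyperparameter selection (assumes scikit-learn).
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Candidate models and hyperparameter grids the search explores with no manual tuning.
candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.01, 0.1, 1.0, 10.0]}),
    (DecisionTreeClassifier(), {"max_depth": [2, 4, 8, None]}),
]

best_score, best_model = 0.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5)  # 5-fold cross-validation
    search.fit(X, y)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"Selected model: {best_model} (cv accuracy {best_score:.3f})")
```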
The AutoML-Zero research project pushes that definition beyond automating a few predetermined processes, aiming at a machine learning process that creates, controls, and produces results entirely within the machine itself. The research process randomly generated 100 simple mathematical models and then selected those that produced the best results compared with models hand-designed by data scientists. The best result was designated the "parent," which was then mutated (in the spirit of Darwinian evolution) and processed through many continuous (24/7) cycles. Over time, the self-generated algorithms became better at performing a task than traditional methods. Much work remains, but if this approach can be scaled up to successfully complete more complex tasks, it will not only reduce the need for the limited supply of data scientists but also mark one small step toward AGI.
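The evolutionary loop described above can be sketched in miniature. The toy below generates 100 random candidate models, keeps the best as the "parent," and mutates it over many cycles; the task (fitting y = 3x + 2) and all names are illustrative and not taken from the actual research code.

```python
# Toy evolutionary search: random candidates, keep the best, mutate, repeat.
import random

data = [(x, 3 * x + 2) for x in range(-10, 11)]  # target relationship: y = 3x + 2

def loss(model):
    """Squared error of a candidate linear model (a, b) over the data."""
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in data)

# Step 1: randomly create 100 simple candidate models (here, (a, b) pairs).
population = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(100)]
parent = min(population, key=loss)  # the best candidate becomes the "parent"

# Step 2: mutate the parent over many cycles, keeping any improvement.
for _ in range(10_000):
    child = (parent[0] + random.gauss(0, 0.1), parent[1] + random.gauss(0, 0.1))
    if loss(child) < loss(parent):
        parent = child

print(f"Evolved model: y = {parent[0]:.2f}x + {parent[1]:.2f}")
```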
Symbolic AI (aka rule-based AI) was the dominant and successful model of AI from the mid-1950s to the late 1980s, and was considered the road to AGI. Symbolic Artificial Intelligence makes use of data that represents real-world objects or concepts to make intelligent decisions based on facts and rules. This approach mirrors the human ability to understand the world by forming internal symbolic representations and creating rules for dealing with these concepts. Symbolic AI remains in place today in expert systems solving rule-based, real-world tasks considered too complicated or too routine for humans. In the early 2010s, Symbolic AI was eclipsed by Neural Networks.
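As a hedged illustration of the rule-based approach, the sketch below applies if-then rules to a set of facts by forward chaining until no new conclusions emerge; the domain and rule names are invented for illustration.

```python
# Minimal symbolic (rule-based) inference: facts plus if-then rules,
# applied repeatedly until no new facts can be derived.
facts = {"has_feathers", "lays_eggs"}

# Each rule: if all conditions are known facts, conclude the new fact.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_flightless_bird"),
]

changed = True
while changed:                      # keep applying rules until a fixpoint
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # e.g. {'has_feathers', 'lays_eggs', 'is_bird'}
```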
Neural Networks contain algorithms that attempt to recognize patterns and relationships in data sets, similar to the way the brain operates. Training a Neural Network requires presenting thousands of examples (principally pictures) until the system recognizes the object; a trained network can even generate similar objects that do not exist. Neural Networks, an instance of machine learning, can adapt to changing input, change as they learn to solve a problem, and improve with each example. Neural Networks found their niche in self-driving cars, facial recognition, and similar applications. Until recently, of the two approaches, Neural Networks were considered the better path to AGI.
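A minimal sketch of how such training works, assuming only NumPy: a tiny two-layer network learns the XOR pattern by repeatedly adjusting its weights against examples. The layer sizes, learning rate, and iteration count are illustrative.

```python
# Tiny two-layer neural network trained by gradient descent to learn XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Network shape: 2 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20_000):
    h = sigmoid(X @ W1 + b1)        # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)      # forward pass: output layer
    # Backpropagate the error and nudge the weights (learning rate 0.5).
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]
```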
Linking these two AI concepts (Neural Networks and Symbolic AI) seems counterintuitive. But researchers using neuro-symbolic modeling, considered one of today's most exciting AI areas, are working to bring the ideas together, i.e., combining rules and patterns. Neural Networks extract symbols from raw data, while Symbolic AI algorithms apply rules to those symbols. A combined approach could identify not only the shape and color of an object (Neural Network) but also reason about its area, length, width, and volume (Symbolic AI). An AI approach that brings them together would be able to tackle problems such as the complicated tasks involved in self-driving cars today and translating text into dozens of different languages.
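As a toy sketch of this combination, the snippet below uses a stub in place of the neural perception step (producing symbols such as shape and color) and then applies symbolic rules to derive area and volume; the classifier stub, rules, and values are all illustrative, not drawn from any published neuro-symbolic system.

```python
# Toy neuro-symbolic pipeline: learned perception produces symbols,
# symbolic rules then reason over measured attributes.

def neural_perception(image):
    """Stand-in for a trained neural network mapping raw pixels to symbols."""
    return {"shape": "box", "color": "red"}  # e.g., the network's predicted labels

def symbolic_reasoning(symbols, length, width, height):
    """Rules operating on the symbols the network produced."""
    derived = dict(symbols)
    if symbols["shape"] == "box":            # rule fires only for boxes
        derived["area"] = length * width
        derived["volume"] = length * width * height
    return derived

symbols = neural_perception(image=None)      # perception step (neural)
print(symbolic_reasoning(symbols, length=2.0, width=3.0, height=1.5))
# {'shape': 'box', 'color': 'red', 'area': 6.0, 'volume': 9.0}
```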
OpenAI is an AI research and deployment organization, founded as a nonprofit by Elon Musk, Sam Altman, and others and later heavily backed by Microsoft. OpenAI's stated goal is to promote and develop "friendly AI" for the benefit of humankind.
In June 2020, OpenAI released GPT-3, a powerful general-purpose text generation model exposed through an Application Programming Interface (API). Initial beta testing of the API suggested a significant step toward AGI-level text generation capability.
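For flavor, a call to the completion API as it looked during the 2020 beta might resemble the sketch below (using the openai Python package of that era; the engine name, prompt, and key are placeholders, and the library's interface has since changed).

```python
# Sketch of a 2020-era GPT-3 completion request via the openai package.
import openai

openai.api_key = "YOUR_API_KEY"  # key issued to beta testers

response = openai.Completion.create(
    engine="davinci",                # the largest GPT-3 model at launch
    prompt="Regulatory hurdles facing health care investors include",
    max_tokens=100,                  # length of the generated continuation
    temperature=0.7,                 # sampling randomness
)
print(response.choices[0].text)
```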
Testing results are controversial and potentially dangerous, but show beneficial applications. GPT-3 was trained on massive amounts of web documents, trillions of words including C++ and other coding tutorials, from which it learned statistical word patterns. Initial feedback from a limited number of API testers has been overwhelmingly (if not breathlessly) positive. For example: an incomplete health care investment memo fed to GPT-3 generated additional, comprehensible, logically written paragraphs on regulatory hurdles, risk, and long-term strategy; another tester's presumably complete essay, "How to Run an Effective Board Meeting," prompted GPT-3 to generate a three-step process on how to recruit board members. On the potentially damaging side, GPT-3's near-human-quality writing ability and open access are also available for malicious uses such as misinformation, phishing, hacking, spam, fake news, "researched journalism," advertising, and propaganda.
There are many potential benefits, and GPT-3 is not AGI, but it is an intriguing step forward. GPT-3 has been described as having disruptive potential comparable to that of blockchain, as a major step forward in knowledge manipulation, and as the iPhone of AI, opening the door to countless commercial applications. But, in AGI terminology, GPT-3 lacks one essential human characteristic: understanding. GPT-3 can create, from an enormous database, syntactically correct and logical sentences that make sense to us, but it does not understand what it has written or why.
OpenAI released ChatGPT Enterprise in August 2023. This platform creates the potential for businesses to run ChatGPT against a specified business database. However, allowing ChatGPT to work against a specific business database raises questions and concerns. Essentially, can a business comfortably share its data with ChatGPT and, possibly, the rest of the internet?
To address this concern, OpenAI has built guarantees into the new tier of ChatGPT service. Specifically, "customer prompts and company data will not be used for training OpenAI models, data at rest and in transit will be encrypted, and the platform is System and Organization Controls (SOC) 2 compliant."
Systems that connect external technology directly to the human brain and allow humans to control or communicate with that technology exist today. This brain-computer interface, or BCI, has many applications. One example allows people who are mentally aware but physically locked in to communicate electronically by means of metabolic or electrical activity in the brain, picked up by electrodes placed on the brain's surface. BCIs are also used in rehabilitation and in the control of prosthetics.
Elon Musk's startup Neuralink claims to be working on a direct link between computer systems and the brain. Researchers claim to have produced a form of "thread" (finer than a human hair) which, when inserted into the brain, can monitor neuron activity and could hypothetically detect and analyze thought. When proven safe for human use, Neuralink will not begin reading our minds with AI, but it may well impact the medical profession with significant research and prospective treatment of diseases such as Parkinson's and dementia.
The next step beyond the brain-computer interface is brain-computer integration, i.e., part human, part robot: cyborgs. While still science fiction, progress is being made. To complete the merging of a human mind with an AI-capable computer, the interface would need to be as non-disruptive as possible. The human brain is quite delicate and does not tolerate intrusive embedding of gold, silicon, or steel electrodes or injected threads, which cause scarring when implanted. Brains are organic. A cyborg model would require a non-intrusive "implant" in the brain allowing a wireless two-way electronic connection to an AI-capable computer. If (when) this becomes possible, the goal would be communication in both directions: AI power, knowledge, and internet information flowing to the brain; control and commands flowing to the computer.
In August 2020, Dr. David Martin and researchers from the University of Delaware announced that they are working with a commercially available electrically conductive polymer, poly(3,4-ethylenedioxythiophene) or PEDOT, which has the properties needed to potentially integrate electronic hardware with human tissue without scarring. The latest research is already using PEDOT film to electronically stimulate blood vessel growth after injury. In Sweden, Magnus Berggren, a professor of organic electronics at Linköping University, leads a research team that removed the roots from a garden rose and placed it in a solution of PEDOT. Several days later, the polymer had been soaked up by the plant's xylem network and solidified as a gel, forming a set of slender conductive wires that were technically an organic electronic circuit. The xylem wires were formed into a transistor, a primary building block of computing and electronics. Sometime in the future, research might follow this path, using PEDOT as the integration connector between brain and computer and creating a cyborg.