History of Machine Learning



It is often perceived that Computer Science and related fields of study are very new, having developed within only the past 30 years or so.


Machine learning is essentially a name for a family of algorithms that fit functions to complex data in order to make predictions.


The story begins in the late 1700s with Bayes' theorem, presented in Thomas Bayes' An Essay towards solving a Problem in the Doctrine of Chances (1763).

Bayes' theorem gives a mathematical rule for inverting conditional probabilities,

allowing us to find the probability of a cause given its effect.


In machine learning, it is used to invert the probability of the observations given a model configuration to obtain the probability of the model configuration given the observations.
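As a concrete illustration, here is a minimal Python sketch of that inversion; the prior and likelihood numbers are made up purely for demonstration:

```python
# Minimal sketch of Bayes' theorem: invert P(observation | model)
# to get P(model | observation). All numbers here are illustrative.

# Prior probabilities of two candidate model configurations.
prior = {"model_a": 0.5, "model_b": 0.5}

# Likelihood of the observed data under each configuration.
likelihood = {"model_a": 0.8, "model_b": 0.3}

# Evidence: total probability of the observation.
evidence = sum(prior[m] * likelihood[m] for m in prior)

# Posterior: probability of each configuration given the observation.
posterior = {m: prior[m] * likelihood[m] / evidence for m in prior}
print(posterior)  # {'model_a': ~0.727, 'model_b': ~0.273}
```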


In the 1940s and 1950s, several machine-learning algorithms were discovered.


Neural networks, a common paradigm in modern machine learning that involves representing a function as a network of interconnected neurons, were first conceived in 1943 in the paper A logical calculus of the ideas immanent in nervous activity, by Warren McCulloch and Walter Pitts.



Neural networks were presented as a model inspired by biology, and that idea is the foundation of this branch of machine learning: one neuron is a very simple mathematical function, and the complexity that makes neural networks worthwhile comes from combining a large number of these neurons and letting them interact.
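To make that concrete, here is a minimal Python sketch of a single neuron and a tiny network built from it; the weights are arbitrary illustrative values, and a sigmoid activation is used rather than the original McCulloch-Pitts threshold:

```python
import math

# A single artificial neuron: a weighted sum of inputs passed
# through a simple nonlinearity (here, a sigmoid).
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# One neuron on its own is a very simple function...
print(neuron([1.0, 0.5], [0.4, -0.6], 0.1))

# ...the expressive power comes from wiring many together:
# a tiny two-layer "network" built from the same function.
def tiny_network(inputs):
    hidden = [neuron(inputs, [0.5, -0.2], 0.0),
              neuron(inputs, [-0.3, 0.8], 0.1)]
    return neuron(hidden, [1.0, -1.0], 0.0)

print(tiny_network([1.0, 0.5]))
```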

 

The first self-learning game programs, which played checkers, were created in 1952 by Arthur Samuel at IBM. The perceptron, a model that fits a linear decision boundary in order to classify data into different groups, was invented by Frank Rosenblatt in 1957 at Cornell.
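Here is a minimal sketch of the perceptron learning rule on toy two-dimensional data; the data points and learning rate are illustrative assumptions, not anything from Rosenblatt's paper:

```python
# Perceptron sketch: learn a linear decision boundary
# w[0]*x1 + w[1]*x2 + b = 0 separating two classes.

# Linearly separable toy data: (x1, x2) -> label in {-1, +1}.
data = [((2.0, 1.0), 1), ((1.5, 2.0), 1),
        ((-1.0, -0.5), -1), ((-2.0, -1.5), -1)]

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.1        # learning rate

for epoch in range(20):
    for (x1, x2), y in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
        if pred != y:  # update weights only on mistakes
            w[0] += lr * y * x1
            w[1] += lr * y * x2
            b += lr * y

print(w, b)  # parameters of the learned decision boundary
```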



Interestingly, neural networks themselves were created and studied before probabilistic inductive inference, a framework now widely used to build and reason about such models, was formalized by Ray Solomonoff in 1964 in his paper A formal theory of inductive inference.


Because the computational cost of training reliable neural networks was prohibitive at the time, progress on them stalled; other forms of machine learning, however, continued to be studied and improved upon.


In 1986, Terry Sejnowski created NETtalk, a program that could learn to pronounce words the way a baby does (Timeline of Machine Learning).


Research in neural networks picked up again in 1997 with the introduction of LSTM (long short-term memory) networks by Sepp Hochreiter and Jürgen Schmidhuber.


These networks, unlike those of the past, maintain both a long-term and a short-term memory, which can aid training for certain kinds of classification tasks.
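As a rough illustration of where those two memories live, here is a minimal NumPy sketch of one LSTM cell step in its standard textbook formulation; the shapes and random weights are illustrative assumptions, not any particular library's API:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # All four gate pre-activations computed in one stacked product.
    z = W @ x + U @ h_prev + b
    n = h_prev.size
    f = sigmoid(z[:n])        # forget gate: what long-term memory to keep
    i = sigmoid(z[n:2*n])     # input gate: what new information to write
    o = sigmoid(z[2*n:3*n])   # output gate: what to expose this step
    g = np.tanh(z[3*n:])      # candidate values to write
    c = f * c_prev + i * g    # long-term memory (cell state)
    h = o * np.tanh(c)        # short-term memory (hidden state)
    return h, c

rng = np.random.default_rng(0)
n, d = 4, 3                         # hidden size, input size
W = rng.normal(size=(4 * n, d))     # input-to-gates weights
U = rng.normal(size=(4 * n, n))     # hidden-to-gates weights
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for x in rng.normal(size=(5, d)):   # run over a short input sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h)
```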


In 1998, the MNIST database was created: a collection of handwritten digits that has been widely used for benchmarking classification algorithms since its conception (LeCun).
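As an example of how MNIST is commonly used for benchmarking today, here is a minimal sketch assuming scikit-learn is installed; fetch_openml downloads a copy of the dataset from OpenML rather than from the original distribution:

```python
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Download MNIST: 70,000 images of digits, each 28x28 = 784 pixels.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0  # scale pixel values to [0, 1]

# Conventional split: hold out 10,000 examples for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=10_000, random_state=0)

# A simple baseline classifier; max_iter kept small for this sketch.
clf = LogisticRegression(max_iter=100).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```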


The late 2000s and 2010s saw neural networks finally begin to enter the mainstream, starting with ImageNet, created in 2009, which provided a massive database of labeled images for training models to recognize thousands of object categories.


A later milestone was Google DeepMind's WaveNet, a generative model for raw audio introduced in 2016. One specific application of WaveNet is text to speech: the network is so accurate that it can be difficult to distinguish its output from a real human voice (WaveNet: A Generative Model for Raw Audio).



