What is machine learning?

Machine learning (ML) is a subcategory of artificial intelligence (AI) that uses algorithms to identify patterns and make predictions within a set of data. This data can consist of numbers, text, or even photos. Under ideal conditions, machine learning allows humans to interpret data more quickly and more accurately than we would ever be able to on our own. Machine learning rests on mathematical foundations that enable algorithms to learn from data, make predictions, and optimize models.

Explore Red Hat AI

Artificial intelligence arises when humans synthetically create a sense of human-like intelligence within a machine. For machine learning, this means programming machines to mimic specific cognitive functions that humans naturally possess, such as perception, learning, and problem-solving.

How do you get a machine to think like a human? You train it to create its own predictive model. This predictive model serves as the means by which the machine analyzes data and ultimately becomes a "learning" machine. To initiate this process, you’ll need to provide the computer with data and choose a learning model to instruct the machine on how to process the data.

Bring ML practices to your organization

A machine learning model can ultimately use data to serve 3 functions:

  • Describe what happened
  • Predict what will happen
  • Make suggestions about what action to take next

The learning model chosen to train the machine depends on the complexity of the task and the desired outcome. Machine learning is typically classified into 3 learning methods: supervised machine learning, unsupervised machine learning, and reinforcement machine learning.

Supervised learning models are trained with labeled data sets, and are used for tasks like image recognition, where the correct output for each example is known in advance.
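
For instance, here is a minimal supervised learning sketch in Python, assuming the widely used scikit-learn library. A classifier learns from labeled images of handwritten digits, then predicts labels for images it has never seen:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Labeled data: each 8x8 digit image comes with the digit it depicts.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)  # a simple supervised model
model.fit(X_train, y_train)                # learn from the labeled examples
print(model.score(X_test, y_test))         # accuracy on unseen images
```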

Unsupervised learning models look through unlabeled data and find commonalities, patterns, and trends. This method is used for tasks like customer segmentation, recommendation systems, and general data exploration.
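
As a sketch of the unsupervised case, the following snippet (scikit-learn assumed; the two-feature "customer" data is synthetic and purely illustrative) lets k-means discover two groups without ever seeing a label:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical customers described by monthly spend and visit frequency.
customers = np.vstack([
    rng.normal(loc=[20, 2], scale=2, size=(50, 2)),   # low spend, rare visits
    rng.normal(loc=[80, 10], scale=2, size=(50, 2)),  # high spend, frequent visits
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_[:5])         # cluster assignments, found without labels
print(kmeans.cluster_centers_)    # the "typical" customer in each segment
```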

Reinforcement learning models are trained using a process of trial and error within an established reward system. This style of learning is used for things like training a computer to play a game where actions lead to a win or a loss. 
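
One classic form of this is tabular Q-learning, sketched below on an entirely hypothetical toy "game": an agent in a short corridor earns +1 for reaching the right end and -1 for the left, and learns by trial and error which moves pay off:

```python
import numpy as np

n_states, actions = 7, [-1, +1]          # move left or move right
Q = np.zeros((n_states, len(actions)))   # estimated value of each (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

rng = np.random.default_rng(0)
for episode in range(500):
    s = n_states // 2                    # start in the middle of the corridor
    while 0 < s < n_states - 1:          # states 0 and 6 end the game
        a = rng.integers(2) if rng.random() < epsilon else int(Q[s].argmax())
        s2 = s + actions[a]
        r = 1 if s2 == n_states - 1 else (-1 if s2 == 0 else 0)
        # Trial and error: nudge the estimate toward reward + discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q[1:-1].argmax(axis=1))  # learned policy; with enough episodes, all 1s (move right)
```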

Once the computer is familiarized with the way you want it to interpret data (thanks to the learning model and training data), it can make predictions and carry out tasks when presented with new data. Gradually, the computer becomes more accurate in its predictions as it learns from continuous streams of data, and it can come to carry out tasks in less time, and with greater accuracy, than a human could.

Build a hybrid cloud platform for AI/ML workloads

The training phase of machine learning is when the model learns from a set of provided data. During this phase, developers aim to adjust the model’s parameters and minimize errors in its output. This is done by establishing a pipeline to pass data through the model, evaluate its predictions, and use the predictions to improve the model. That pipeline often involves the following steps (sketched in code after the list):

  1. Collect and prepare data: Data is collected and then prepared by separating it into training data and testing data, removing unwanted data, and randomizing for even distribution. Reducing the number of input variables or features in a dataset while retaining its essential information is known as “dimensionality reduction.”
  2. Select a model: Data scientists and engineers have created various machine learning algorithms for different tasks like speech recognition, image recognition, prediction, and more.
  3. Training: The prepared input data is sent through the model to find patterns (pattern recognition) and make predictions.
  4. Evaluating: After the training, a model’s output is evaluated against a previously unused set of data.

  5. Tuning: Developers then tune the model’s parameters to further improve it, based on findings from the evaluation step.
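
Here is a minimal sketch of those five steps, assuming scikit-learn and one of its bundled datasets (any ML library follows the same shape):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier

# 1. Collect and prepare data: shuffle and split into training and testing sets.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=0)

# 2. Select a model suited to the task (here, classification).
model = RandomForestClassifier(random_state=0)

# 3. Training: fit the model so it finds patterns in the prepared data.
model.fit(X_train, y_train)

# 4. Evaluating: score the model against data it has never seen.
print("test accuracy:", model.score(X_test, y_test))

# 5. Tuning: search over parameters to improve on the evaluation results.
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      {"max_depth": [3, 5, None]}, cv=3)
search.fit(X_train, y_train)
print("best parameters:", search.best_params_)
```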

Common challenges during training and evaluation

A model performing well on the training data but poorly on the test data may be overfitting: learning too much from noise in the training data. A model that performs poorly on both sets may be underfitting: failing to learn the underlying patterns at all.

To guard against overfitting the training data, a separate validation data set may be used. After each iteration, the model's output is evaluated against the validation data, and adjustments are made to prevent overfitting. One such adjustment is dimensionality reduction: removing the extraneous features that can lead to overfitting. This reduction must be done carefully so as not to cause underfitting instead.
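
The sketch below (scikit-learn assumed) shows both ideas: a held-out validation set exposes overfitting when training accuracy far exceeds validation accuracy, and dimensionality reduction via PCA shrinks the feature space the model can memorize:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# An unconstrained tree memorizes the training set almost perfectly...
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train:", tree.score(X_train, y_train))   # near 1.0
print("valid:", tree.score(X_val, y_val))       # noticeably lower: overfitting

# ...so reduce 64 pixel features to 16 components and compare. A smaller,
# less noisy feature space can narrow the gap (reduce too far and it underfits).
reduced = make_pipeline(PCA(n_components=16),
                        DecisionTreeClassifier(random_state=0)).fit(X_train, y_train)
print("valid, reduced:", reduced.score(X_val, y_val))
```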

To correct for underfitting, developers can add more informative features that improve the model’s ability to capture complex relationships in the data.

Data leakage occurs when information from the test set accidentally leaks into the training process, giving the model an unfair advantage and resulting in overestimated performance.
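
A minimal sketch of how leakage sneaks in, assuming scikit-learn: fitting a preprocessing step (here, a scaler) on all of the data lets test-set statistics influence training, while a pipeline fit only on the training split keeps the test set honest:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Leaky: the scaler's mean and variance are computed from ALL rows,
# including the test set, before training ever starts.
scaler = StandardScaler().fit(X)
leaky = SVC().fit(scaler.transform(X_train), y_train)
print("leaky setup:", leaky.score(scaler.transform(X_test), y_test))

# Safe: every preprocessing step is fit on the training data only.
safe = make_pipeline(StandardScaler(), SVC()).fit(X_train, y_train)
print("safe setup: ", safe.score(X_test, y_test))
```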

Tuning, new features, and more relevant data can minimize errors on future iterations. 

Neural networks are a type of algorithm used in machine learning, particularly suited to tasks involving complex, non-linear relationships in data. Deep learning is a subset of machine learning that uses neural networks with many layers. These deep neural networks are well suited to learning hierarchical representations of data, which makes deep learning extremely powerful for tasks like image recognition, natural language processing, and speech recognition.
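
As a small illustration of that non-linear power, the sketch below uses scikit-learn's MLPClassifier (an assumption; at scale, deep learning frameworks like PyTorch or TensorFlow are typical) to fit a two-hidden-layer network to interleaved "moons" that no straight line can separate:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Two interleaved half-circles: a classic non-linear classification problem.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print(net.score(X_test, y_test))   # typically well above a linear baseline
```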

Machine learning and artificial intelligence can enhance user experience, anticipate customer behavior, monitor systems to detect fraud, and even help healthcare providers detect life-threatening conditions. Many of us benefit from and interact with machine learning on a daily basis. Some common machine learning uses include:

  • Recommendation algorithms on your favorite streaming services.
  • Automatic helplines and chatbots.
  • Targeted ads.
  • Automated quotes from financial institutions.

Compare predictive AI vs. generative AI

Generative AI, which now powers many AI tools, is made possible through deep learning, a machine learning technique for analyzing and interpreting large amounts of data. Large language models (LLMs), a subset of generative AI, represent a crucial application of machine learning by demonstrating the capacity to understand and generate human language at an unprecedented scale. 
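
A minimal sketch of putting an LLM to work from Python, assuming the Hugging Face transformers library (and a backend such as PyTorch) is installed; the small public gpt2 model here is illustrative, not a recommendation:

```python
from transformers import pipeline

# Download a small public model and generate a continuation of a prompt.
generator = pipeline("text-generation", model="gpt2")
result = generator("Machine learning is", max_new_tokens=20)
print(result[0]["generated_text"])
```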

Machine learning is becoming an expected feature for many companies to use, and transformative AI/ML use cases are occurring across healthcare, financial services, telecommunications, government, and other industries.

Explore AI/ML use cases

Because machine learning models learn from historical data, they can pick up the bias and discrimination implicit in the human decision-making that produced that data. For example, data can reflect existing racial, gender-based, or socioeconomic biases in society. If training data is not scrubbed for bias, the model can perpetuate and amplify those biases.

Likewise, decisions made by machine learning models, such as loan approvals, hiring, or criminal sentencing, can disproportionately affect marginalized groups. Fairness frameworks exist to help ensure equitable outcomes across different groups.
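
One common fairness check is demographic parity: comparing a model's positive-outcome rate across groups. A minimal sketch with entirely hypothetical decisions:

```python
import numpy as np

# Hypothetical loan decisions (1 = approved) and each applicant's group.
approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in ("A", "B"):
    rate = approved[group == g].mean()
    print(f"group {g}: approval rate {rate:.0%}")
# A large gap between group rates can signal disparate impact worth investigating.
```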

Machine learning models can be seen as "black boxes" because their internal processes are not visible or understood. When a lack of transparency makes it difficult for humans to understand how a model makes a decision, this can lead to a lack of trust.

When a machine learning system makes a wrong decision, like one based on bias or discrimination, accountability can be difficult to determine. Is a machine learning model’s decision the responsibility of the developer, the organization using the system, or the system itself?

Because machine learning requires vast amounts of data to train effective models, organizations are incentivized to collect and store large volumes of personal data, which raises concerns about privacy and the potential for misuse.

Additionally, storing large datasets containing personal information increases the risk of data breaches impacting individuals through identity theft, financial fraud, or reputational damage.

Red Hat provides the common foundations for your teams to build and deploy AI apps and machine learning (ML) models with transparency and control. 

Red Hat® OpenShift® AI is a platform that can train, prompt-tune, fine-tune, and serve AI models for your unique use case and with your own data.

For large AI deployments, Red Hat OpenShift offers a scalable application platform suitable for AI workloads, complete with access to popular hardware accelerators.

Red Hat is also using our own Red Hat OpenShift AI tools to improve the utility of other open source software, starting with Red Hat Ansible® Lightspeed with IBM watsonx Code Assistant. Ansible Lightspeed helps developers create Ansible content more efficiently. It reads plain English entered by a user, and then it interacts with IBM watsonx foundation models to generate code recommendations for automation tasks that are then used to create Ansible Playbooks.

Additionally, Red Hat’s partner integrations open the doors to an ecosystem of trusted AI tools built to work with open source platforms.

Learn more about OpenShift AI