There has been an upsurge in machine learning methods in recent years. Growing evidence suggests that machine learning has become one of the main things organizations do with the big data they have accumulated.
Like any complex undertaking, machine learning is worth breaking down into its component parts. That is the objective of this episode of the Talking Data podcast, in which TechTarget reporters Jack Vaughan and Ed Burns discuss the evolution of machine learning through the lens of the technologies employed and their end-use applications.
Among the use cases cited are risk estimation in insurance, credit scoring and digital ad placement. That wide span of applications leads, in turn, to a broad variety of systems, as different amounts and types of data feed machine learning algorithms that iteratively predict likely outcomes and test their output against known good results that have already been recorded.
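That iterative predict-and-check loop can be sketched with a tiny perceptron, which repeatedly predicts a label, compares it against the recorded "known good" answer, and corrects itself. This is an illustrative sketch only; the data, learning rate and epoch count are assumptions, not anything discussed in the podcast.

```python
# A minimal sketch of iterative learning: predict, compare with the known
# label, and adjust. Data and hyperparameters here are illustrative.

def train_perceptron(data, epochs=20, lr=0.1):
    """data: list of ((x1, x2), label) with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0  # predict an outcome
            err = label - pred                            # test against the known result
            w1 += lr * err * x1                           # correct the model
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# "Known good results" already recorded: points above x1 + x2 = 1 are class 1.
data = [((0.0, 0.0), 0), ((0.2, 0.3), 0), ((0.9, 0.8), 1), ((1.0, 1.0), 1)]
w1, w2, b = train_perceptron(data)
```

After a few passes the weights settle so that every labeled example is classified correctly, which is the "test output against recorded results" loop in miniature.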
The podcast briefly covers machine learning's roots in statistics. In fact, such bread-and-butter methods as naive Bayesian filters go back at least as far as the 1950s, when translating such functions into computer code was a labor-intensive task. These days, the software and data infrastructure available for applying such a filter have changed dramatically.
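To make the method concrete, here is a minimal naive Bayes text filter written from scratch. The toy documents, labels and function names are hypothetical illustrations, not anything taken from the podcast; it simply shows the classic count-words, apply-Bayes'-rule recipe with Laplace smoothing.

```python
# A from-scratch naive Bayes classifier (toy data, illustrative only).
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label). Returns class priors, word counts, vocab."""
    class_docs = defaultdict(list)
    for text, label in docs:
        class_docs[label].append(text)
    priors = {c: len(d) / len(docs) for c, d in class_docs.items()}
    counts = {c: Counter(w for t in d for w in t.lower().split())
              for c, d in class_docs.items()}
    vocab = {w for cnt in counts.values() for w in cnt}
    return priors, counts, vocab

def classify(text, priors, counts, vocab):
    """Pick the class with the highest log-posterior (Laplace smoothing)."""
    scores = {}
    for c in priors:
        total = sum(counts[c].values())
        score = math.log(priors[c])
        for w in text.lower().split():
            # Add-one smoothing so unseen words don't zero out the posterior.
            score += math.log((counts[c][w] + 1) / (total + len(vocab)))
        scores[c] = score
    return max(scores, key=scores.get)

docs = [("win money now", "spam"), ("cheap money offer", "spam"),
        ("meeting at noon", "ham"), ("lunch meeting today", "ham")]
model = train(docs)
print(classify("free money", *model))  # -> spam
```

The arithmetic here is simple enough to do by hand, which is why such filters predate modern computing infrastructure; what has changed is the scale of data and the distributed platforms they now run on.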
In the 1990s, machine learning methods began to appear in the discipline known as data mining, which became part of applications running on major relational databases and data warehouse systems. Today, tools like H2O, Spark and Mahout come into play, running on distributed data infrastructure. Also discussed is Amazon Machine Learning software, which enables users to launch test applications in the cloud without configuring hardware or software infrastructure.
Check out this episode of the Talking Data podcast and learn more about machine learning methods that stand poised to change the course of data analytics.
Learn what leading luminary Yann LeCun sees upcoming in machine learning
Read notes on the passing of AI great Marvin Minsky