Talking Machines

By Tote Bag Productions



Category: Tech News



Description

Talking Machines is your window into the world of machine learning. Your hosts, Katherine Gorman and Neil Lawrence, bring you clear conversations with experts in the field, insightful discussions of industry news, and useful answers to your questions. Machine learning is changing the questions we can ask of the world around us; here we explore how to ask the best questions and what to do with the answers.

Episode | Date
The Possibility Of Explanation and The End of Season Four
00:18:12

For the end of season four we take a break from our regular format and bring you a talk from Professor Finale Doshi-Velez of Harvard University on the possibility of explanation. Tune in next season!

Nov 29, 2018
Neural Information Processing Systems and Distributed Internal Intelligence Systems
00:36:36

In episode twenty one of season four we talk about distributed intelligence systems (mainly those internal to humans), discuss what we're excited to see at the Conference on Neural Information Processing Systems, and, in advance of our trek to Canada, chat with Garth Gibson, president and CEO of the Vector Institute.

Nov 16, 2018
Data Driven Ideas and Actionable Privacy
00:45:19

In episode twenty of season four we talk about the importance of crediting your data, answer a listener question about internships vs salaried positions, and talk with Matt Kusner of The Alan Turing Institute, the UK's national institute for data science and AI.

Nov 01, 2018
AI for Good and The Real World
00:32:34

In episode nineteen of season four we talk about causality in the real world, take a question about being surprised by the elephant in the room and talk with Kush Varshney of IBM.


Oct 18, 2018
Systems Design and Tools for Transparency
00:40:20

In episode 18 of season four we talk about systems design (remember the three Ds!), tools for transparency and fairness, and we talk with Adria Gascon of The Alan Turing Institute, the UK's national institute for data science and AI.

Oct 05, 2018
How to Research in Hype and CIFAR's Strategy
00:37:07

In episode 17 of season four we talk about how to research in a time of hype (and other lessons from Tom Griffiths' book) and Neil's love of variational methods, and we chat with Elissa Strome, director of the Pan-Canadian AI Strategy for CIFAR.

Sep 20, 2018
Troubling Trends and Climbing Mountains
00:39:32

In this episode we talk about the article Troubling Trends in Machine Learning Scholarship, the difference between engineering and science (and the mountains you climb to span the distance), plus we talk with David Duvenaud of the University of Toronto.

Sep 07, 2018
Gaussian Processes, Grad School, and Richard Zemel
00:43:43
Aug 23, 2018
Long Term Fairness
00:29:25
Aug 09, 2018
Simulated Learning and Real World Ethics
00:57:32

In episode thirteen of season four we chat about simulations, reinforcement learning, and Philippa Foot. We take a listener question about the update to the ACM code of ethics (the first time since 1992!) and we talk with Professor Mike Jordan.

Jul 27, 2018
ICML 2018 with Jennifer Dy
00:19:54

Season four episode twelve finds us at ICML! We bring you a special episode with Jennifer Dy, co-program chair of the conference.

Jul 12, 2018
Aspirational Asimov and How to Survive a Conference
00:45:02

In season four episode eleven we talk about the possibility of the NIPS conference changing its name, what to do at ICML, and we talk with Bernhard Schölkopf.


Jun 28, 2018
Explanations and Reviews
00:23:35
Jun 14, 2018
Statements on Statements
00:26:47

In episode 9 of season 4 we talk about the Statement on Nature Machine Intelligence. We reached out to Nature for a statement on the statement and received the following:

“At Springer Nature we are very clear in our mission to advance discovery and help researchers share their work. Having an extensive, and growing, open access portfolio is one important way we do this but it is important to remember that while open access has been around for 20 years now it still only accounts for a small percentage of overall global research output with demand for subscription content remaining high. This is because the move to open access is complex, and for many, simply not a viable option.

Nature Machine Intelligence is a new subscription journal that aims to stimulate cross-disciplinary interactions, reach broad audiences and explore the impact that AI research has on other fields by publishing high-quality research, reviews and commentary on machine learning, robotics and AI. It involves substantial editorial development, offers high levels of author service and publishes informative, accessible content beyond primary research all of which requires considerable investment. At present, we believe that the fairest way of producing highly selective journals like this one and ensuring their long-term sustainability as a resource for the widest possible community, is to spread these costs among many readers — instead of having them borne by a few authors.   

 We also offer multiple open access options for AI authors. We already publish AI papers in Scientific Reports and Nature Communications, which are the largest open access journal in the world and the most cited open access journal respectively. We offer hybrid publishing options and are set to launch a new AI multidisciplinary, open access journal later this year.

We help all researchers to freely share their discoveries by encouraging preprint posting and data- and code-sharing and continue to extend access to all Nature journals in various ways, including our free SharedIt content-sharing initiative, which provides authors and subscribers with shareable links to view-only versions of published papers.”

We also get a chance to talk with Maithra Raghu from the Google Brain team about her work.

May 31, 2018
The Futility of Artificial Carpenters and Further Reading
00:37:18

In episode eight of season four we review some recently published articles by Michael Jordan and Rodney Brooks (for more reading along these lines, Tom Dietterich is a great person to follow), we recommend some further reading, and we talk with Arthur Gretton, who was part of the team behind one of the Best Papers at NIPS 2017.

For more reading we recommend Machine Learning Yearning, Talking Nets, The Mechanical Mind in History, and Colossus.

May 17, 2018
Economies, Work and AI
00:42:40

In episode seven of season four we chat about ELLIS and the UK AI Sector Deal, we take a listener question about the next AI winter and if/when it is coming, plus we hear from Christina Colclough, Director of Platform and Agency Workers, Digitalization and Trade at UNI Global Union.

May 03, 2018
Explainability and the Inexplicable
00:43:57

In episode six of season four we chat about AI and religion, we take a listener question about personal bias checking and we hear from Been Kim of Google Brain.

Apr 19, 2018
Good Data Practice Rules
00:51:35

In episode five of season four we talk about the GDPR or, as we like to think of it, Good Data Practice Rules. (If you actually read it, you move to expert level!) We take a listener question about the power of approximate inference, and we hear from our guest Andrew Blake of The Alan Turing Institute.

Apr 05, 2018
Can an AI Practitioner Fix a Radio?
00:44:17

In episode four of season four we talk more about natural and artificial intelligences and thinking about diversity in systems. Can a Biologist Fix a Radio? is a great paper to read around these ideas. We take a listener question about moving into machine learning after having advanced training in a different program. Our guest on this episode is our second returning guest, Peter Donnelly, Professor of Statistical Science at the University of Oxford, Director of the Wellcome Trust Center for Human Genetics, and a Fellow of the Royal Society.

Mar 22, 2018
Natural vs Artificial Intelligence and Doing Unexpected Work
00:58:28

In season four episode three of Talking Machines we chat about Neil's recent thinking (definitely not work) on the core differences between natural intelligence and machine intelligence; he recently wrote a blog post on the subject, and in the fall of 2017 he gave a TEDx talk about the topic. We also take a listener question about what maths you should take to get into building ML tools. Our guests this week are Moshe Vardi, Karen Ostrum George Distinguished Service Professor in Computational Engineering and Director of the Ken Kennedy Institute for Information Technology at Rice University, and Margaret Levi, Director of the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford, Professor of Political Science at Stanford University, and Jere L. Bacharach Professor Emerita of International Studies in the Department of Political Science at the University of Washington. They co-organized a symposium put on by the American Academy of Arts and Sciences and the Royal Society about the future of work. We got a chance to speak to both of them about their work and the event.

Mar 08, 2018
Scientific Rigor and Turning Information into Action
00:38:20

In episode two of season four we're proud to bring you the second annual "Hosts of Talking Machines" episode! Ryan and Neil chat about Ali Rahimi's speech at NIPS 2017, Kate Crawford's talk The Trouble with Bias, and much more.

We also get to hear a conversation with Ciira wa Maina, lecturer in the Department of Electrical and Electronic Engineering at Dedan Kimathi University of Technology in Nyeri, Kenya.

Feb 22, 2018
Code Review for Community Change
00:35:17

On this episode of Talking Machines we take a break from our regular format to talk about the “code review of community culture” that the AI, ML, Stats and Computer Science fields in general need to undergo. 

In a blog post put up shortly after NIPS, researcher Kristian Lum outlined several instances of sexual harassment and abuse of power. In her post she mentioned Brad Carlin and a person whom she referred to as S. We learned from reporting done by Bloomberg that S was Steven Scott, who was at Google.

As of this posting, Carlin is under investigation and Scott has left Google after being suspended.

Today we pause in our regular format to talk about how we, as a community, can change. 

Full disclosure: Neil and Katherine served as press chairs for NIPS 2017. They will hold the same post for ICML 2018 and NIPS 2018 and are working along with the other organizers of these events to effect change around these issues.

Feb 08, 2018
The Pace of Change and The Public View of ML
00:40:12

In episode ten of season three we talk about the rate of change (prompted by Tim Harford), take a listener question about the power of kernels, and talk with Peter Donnelly in his capacity with the Royal Society's Machine Learning Working Group about the work they've done on the public's views on AI and ML.

Oct 05, 2017
The Long View and Learning in Person
01:05:50

In episode nine of season three we chat about the difference between models and algorithms, take a listener question about summer schools and learning in person as opposed to learning digitally, and we chat with John Quinn of the United Nations Global Pulse lab in Kampala, Uganda and Makerere University's Artificial Intelligence Research group.

Sep 21, 2017
Machine Learning in the Field and Bayesian Baked Goods
00:59:39

In episode eight of season three we return to the epic (or maybe not so epic) clash between frequentists and Bayesians, take a listener question about the ethical questions creators of machine learning should be asking of themselves (not just of their tools), and we hear a conversation with Ernest Mwebaze of Makerere University.

Sep 08, 2017
Data Science Africa with Dina Machuve
00:48:13

In episode seven of season three we take a minute to break away from our regular format and feature a conversation with Dina Machuve of the Nelson Mandela African Institute of Science and Technology. We cover everything from her work to how cell phone access has changed data patterns. We got to talk with her at the Data Science Africa conference and workshop.



Aug 10, 2017
The Church of Bayes and Collecting Data
00:49:36

In episode six of season three we chat about the difference between frequentists and Bayesians, take a listener question about techniques for panel data, and have an interview with Katherine Heller of Duke.


Jul 28, 2017
Getting a Start in ML and Applied AI at Facebook
00:57:47

In episode five of season three we compare and contrast AI and data science, take a listener question about getting started in machine learning, and listen to an interview with Joaquin Quiñonero Candela.

For a great place to get started with foundational ideas in ML, take a look at Andrew Ng's course on Coursera. Then check out Daphne Koller's course.


Talking Machines is now working with Midroll to source and organize sponsors for our show. In order to find sponsors who are a good fit for us, and of worth to you, we're surveying our listeners.

If you’d like to help us get a better idea of who makes up the Talking Machines community take the survey at http://podsurvey.com/MACHINES.

Jul 13, 2017
Bias Variance Dilemma for Humans and the Arm Farm
00:50:10

In episode four of season three Neil introduces us to the ideas behind the bias variance dilemma (and how we can think about it in our daily lives). Plus, we answer a listener question about how to make sure your neural networks don't get fooled. Our guest for this episode is Jeff Dean, Google Senior Fellow in the Research Group, where he leads the Google Brain project. We talk about a closet full of robot arms (the arm farm!), image recognition for diabetic retinopathy, and equality in data and the community.

 

Fun Fact: Geoff Hinton's distant relative invented the word tesseract. (How cool is that? Seriously.)

Jun 29, 2017
Overfitting and Asking Ecological Questions with ML
00:41:29
In this episode three of season three of Talking Machines we dive into overfitting, take a listener question about unbalanced data and talk with Professor (Emeritus) Tom Dietterich from Oregon State University.
Jun 15, 2017
Graphons and "Inferencing"
00:41:41
In episode two of season three Neil takes us through the basics on dropout, we chat about the definition of inference (It's more about context than you think!) and hear an interview with Jennifer Chayes of Microsoft. 
May 25, 2017
Hosts of Talking Machines: Neil Lawrence and Ryan Adams
00:33:36
Talking Machines is entering its third season and going through some changes. Our founding host Ryan is moving on, and in his place Neil Lawrence of Amazon is taking over as co-host. We say thank you and goodbye to Ryan with an interview about his work.
Apr 27, 2017
ANGLICAN and Probabilistic Programming
00:44:13
In episode seventeen of season two we get an introduction to min hashing, talk with Frank Wood, the creator of ANGLICAN, about probabilistic programming and his new company, INVREA, and take a listener question about how to choose an architecture when using a neural network.
Sep 01, 2016
Eric Lander and Restricted Boltzmann Machines
00:53:57
In episode sixteen of season two, we get an introduction to Restricted Boltzmann Machines, we take a listener question about tuning hyperparameters,  plus we talk with Eric Lander of the Broad Institute.
Aug 18, 2016
Generative Art and Hamiltonian Monte Carlo
00:47:02
In episode fifteen of season two, we talk about Hamiltonian Monte Carlo, we take a listener question about unbalanced data, plus we talk with Doug Eck of Google’s Magenta project.  
Aug 04, 2016
Perturb-and-MAP and Machine Learning in the Flint Water Crisis
00:38:26
In episode fourteen of season two, we talk about Perturb-and-MAP, we take a listener question about classic artificial intelligence ideas being used in modern machine learning, plus we talk with Jake Abernethy of the University of Michigan about municipal data and his work on the Flint water crisis.
Jul 21, 2016
Automatic Translation and t-SNE
00:32:01
In episode thirteen of season two, we talk about t-Distributed Stochastic Neighbor Embedding (t-SNE), we take a listener question about statistical physics, plus we talk with Hal Daume of the University of Maryland (who is a great follow on Twitter).
Jul 07, 2016
Fantasizing Cats and Data Numbers
00:49:13
In episode twelve of season two, we talk about generative adversarial networks, we take a listener question about using machine learning to improve or create products, plus we talk with Iain Murray of the University of Edinburgh.
Jun 16, 2016
Spark and ICML
00:39:01
In episode eleven of season two, we talk about the machine learning toolkit  Spark, we take a listener question about the differences between NIPS and ICML conferences, plus we talk with Sinead Williamson of The University of Texas at Austin.
Jun 02, 2016
Computational Learning Theory and Machine Learning for Understanding Cells
00:40:47
In episode ten of season two, we talk about Computational Learning Theory and Probably Approximately Correct Learning originated by Professor Leslie Valiant of SEAS at Harvard, we take a listener question about generative systems, plus we talk with Aviv Regev, Chair of the Faculty and Director of the Klarman Cell Observatory and the Cell Circuits Program at the Broad Institute.
May 19, 2016
Sparse Coding and MADBITS
00:41:25
In episode nine of season two, we talk about sparse coding, take a listener question about the next big demonstration for AI after AlphaGo. Plus we talk with Clement Farabet about MADBITS and the work he’s doing at Twitter Cortex.
May 05, 2016
Remembering David MacKay
00:53:15
Recently Professor David MacKay passed away. We’ll spend this episode talking about his extensive body of work and its impacts. We’ll also talk with Philipp Hennig, a research group leader at the Max Planck Institute for Intelligent Systems, who trained in Professor MacKay’s group (with Ryan).
Apr 21, 2016
Machine Learning and Society
00:48:27
Episode seven of season two is a little different from our usual episodes. Ryan and Katherine just returned from a conference where they got to talk with Neil Lawrence of the University of Sheffield about some of the larger issues surrounding machine learning and society. They discuss anthropomorphic intelligence, data ownership, and the ability to empathize. The entire episode is given over to this conversation in hopes that it will spur more discussion of these important issues as the field continues to grow.
Apr 08, 2016
Software and Statistics for Machine Learning
00:39:07
In episode six of season two, we talk about how to build software for machine learning (and what the roadblocks are), we take a listener question about how to start exploring a new dataset, plus, we talk with Rob Tibshirani of Stanford University.
Mar 24, 2016
Machine Learning in Healthcare and The AlphaGo Matches
00:48:31
In episode five of season two Ryan walks us through variational inference, and we put some listener questions about Go and how to play it to Andy Okun, president of the American Go Association (who is in Seoul, South Korea, watching the Lee Sedol/AlphaGo games). Plus we hear from Suchi Saria of Johns Hopkins about applying machine learning to understanding health care data.
Mar 10, 2016
AI Safety and The Legacy of Bletchley Park
00:48:55
In episode four of season two, we talk about some of the major issues in AI safety (and how they're not really that different from the questions we ask whenever we create a new tool). One place you can go for other opinions on AI safety is the Future of Life Institute. We take a listener question about time series, and we talk with Nick Patterson of the Broad Institute about everything from ancient DNA to Alan Turing. If you're as excited about AlphaGo playing Lee Sedol as Nick is, you can get details on the match on DeepMind's YouTube channel March 5th through the 15th.
Feb 25, 2016
Robotics and Machine Learning Music Videos
00:40:07
In episode three of season two Ryan walks us through the AlphaGo results and takes a listener question about using Gaussian processes for classification. Plus we talk with Michael Littman of Brown University about his work, robots, and making music videos. Also not to be missed: Michael's appearance in the recent TurboTax ad!
Feb 11, 2016
OpenAI and Gaussian Processes
00:35:29
In episode two of season two Ryan introduces us to Gaussian processes, and we take a listener question on K-means. Plus, we talk with Ilya Sutskever, the director of research for OpenAI. (For more from Ilya, you can listen to our season one interview with him.)
Jan 28, 2016
Real Human Actions and Women in Machine Learning
00:59:31
In episode one of season two, we celebrate the 10th anniversary of Women in Machine Learning (WiML) with its co-founder (and our guest host for this episode) Hanna Wallach of Microsoft Research. Hanna and Jenn Wortman Vaughan, who also helped to found the event, tell us about how the 2015 event went. Lillian Lee (Cornell), Raia Hadsell (Google DeepMind), Been Kim (AI2/University of Washington), and Corinna Cortes (Google Research) gave invited talks at the 2015 event. WiML also released a directory of women in machine learning; if you'd like to be listed, want to find a collaborator, or are looking for an expert to take part in an event, it's an excellent resource. Plus, we talk with Jenn Wortman Vaughan about the research she is doing at Microsoft Research, which examines the assumptions we make about how humans actually act and uses that to inform thinking about our interactions with computers. Want to learn more about the talks at WiML 2015? Slides are available from each speaker: Lillian Lee, Corinna Cortes, Raia Hadsell, and Been Kim.
Jan 14, 2016
Open Source Releases and The End of Season One
00:40:40
In episode twenty four we talk with Ben Vigoda about his work in probabilistic programming (everything from his thesis to his new company). Ryan talks about TensorFlow and Autograd for Torch, some open source tools that have recently been released. Plus we take a listener question about the biggest thing in machine learning this year. This is the last episode in season one. We want to thank all our wonderful listeners for supporting the show, asking us questions, and making season two possible! We'll be back in early January with the beginning of season two!
Nov 22, 2015
Probabilistic Programming and Digital Humanities
00:48:12
In episode 23 we talk with David Mimno of Cornell University about his work in the digital humanities (and explore what machine learning can tell us about lady zombie ghosts and huge bodies of literature). Ryan introduces us to probabilistic programming, and we take a listener question about knowledge transfer between math and machine learning.
Nov 05, 2015
Workshops at NIPS and Crowdsourcing in Machine Learning
00:47:45
In episode twenty two we talk with Adam Kalai of Microsoft Research New England about his work using crowdsourcing in Machine Learning, the language made of shapes of words, and New England Machine Learning Day. We take a look at the workshops being presented at NIPS this year, and we take a listener question about changing the number of features your data has.
Oct 22, 2015
Machine Learning Mastery and Cancer Clusters
00:26:44
In episode twenty one  we talk with Quaid Morris of the University of Toronto, who is using machine learning to find a better way to treat cancers. Ryan introduces us to expectation maximization and we take a listener question about how to master machine learning.
Oct 08, 2015
Data from Video Games and The Master Algorithm
00:46:17
In episode 20 we chat with Pedro Domingos of the University of Washington, who has just published the book The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. We get some insight into linear dynamical systems, which the Datta Lab at Harvard Medical School is doing some interesting work with. Plus, we take a listener question about using video games to generate labeled data (spoiler alert: it's an awesome idea!). We're in the final hours of our fundraising campaign and we need your help!
Sep 24, 2015
Strong AI and Autoencoders
00:36:03
In episode nineteen we chat with Hugo Larochelle about his work on unsupervised learning, the International Conference on Learning Representations (ICLR), and his teaching style. His YouTube courses are not to be missed, and his Twitter feed @Hugo_Larochelle is a great source for paper reviews. Ryan introduces us to autoencoders (for more, turn to the work of Richard Zemel), plus we tackle the question of what is standing in the way of strong AI. Talking Machines is beginning development of season two! We need your help! Donate now on Kickstarter.
Sep 10, 2015
Active Learning and Machine Learning in Neuroscience
00:53:49
In episode eighteen we talk with Sham Kakade, of Microsoft Research New England, about his expansive work, which touches on everything from neuroscience to theoretical machine learning. Ryan introduces us to active learning (great tutorial here) and we take a question on evolutionary algorithms. Today we're announcing that season two of Talking Machines is moving into development, but we need your help! In order to raise funds, we've opened the show up to sponsorship and started a Kickstarter, and we've got some great nerd cred prizes to thank you with. But more than just getting you a totally sweet mug, your donation will fuel journalism about the reality of scientific research, something that is unfortunately hard to find. Lend a hand if you can!
Aug 27, 2015
Machine Learning in Biology and Getting into Grad School
00:48:26
In episode seventeen we talk with Jennifer Listgarten of Microsoft Research New England about her work using machine learning to answer questions in biology. Recently, with her collaborator Nicolo Fusi, she used machine learning to make CRISPR more efficient and to correct for latent population structure in GWAS studies. We take a question from a listener about the development of computational biology, and Ryan gives us some great advice on how to get into grad school (spoiler alert: apply to the lab, not the program).
Aug 13, 2015
Machine Learning for Sports and Real Time Predictions
00:29:08
In episode sixteen we chat with Danny Tarlow of Microsoft Research Cambridge (in the UK not MA). Danny (along with Chris Maddison and Tom Minka) won best paper at NIPS 2014 for his paper A* Sampling. We talk with him about his work in applying machine learning to sports and politics. Plus we take a listener question on making real time predictions using machine learning, and we demystify backpropagation. You can use Torch, Theano or Autograd to explore backprop more.
Jul 30, 2015
Really Really Big Data and Machine Learning in Business
00:23:46
In episode fifteen we talk with Max Welling, of the University of Amsterdam and University of California Irvine. We talk with him about his work with extremely large data and big business and machine learning. Max was program co-chair for NIPS in 2013 when Mark Zuckerberg visited the conference, an event which Max wrote very thoughtfully about. We also take a listener question about the relationship between machine learning and artificial intelligence. Plus, we get an introduction to change point detection. For more on change point detection check out the work of Paul Fearnhead of Lancaster University. Ryan also has a paper on the topic from way back when.
Jul 16, 2015
Solving Intelligence and Machine Learning Fundamentals
00:30:11
In episode fourteen we talk with Nando de Freitas. He's a professor of Computer Science at the University of Oxford and a senior staff research scientist at Google DeepMind. Right now he's focusing on solving intelligence. (No biggie.) Ryan introduces us to anchor words and how they can help us expand our ability to explore topic models. Plus, we take a question about the fundamentals of tackling a problem with machine learning.
Jul 02, 2015
Working With Data and Machine Learning in Advertising
00:39:11
In episode thirteen we talk with Claudia Perlich, Chief Scientist at Dstillery. We talk about her work using machine learning in digital advertising and her approach to data in competitions. We take a look at information leakage in competitions after the ImageNet Challenge this year. The New York Times covered the events, and Neil Lawrence has been writing thoughtfully about it and its impact. Plus, we take a listener question about trends in data size.
Jun 18, 2015
The Economic Impact of Machine Learning and Using The Kernel Trick on Big Data
00:40:36
In episode twelve we talk with Andrew Ng, Chief Scientist at Baidu, about how speech recognition is going to explode the way we use mobile devices and his approach to working on the problem. We also discuss why we need to prepare for the economic impacts of machine learning. We’re introduced to Random Features for Large-Scale Kernel Machines, and talk about how using this twist on the Kernel trick can help you dig into big data. Plus, we take a listener question about the size of computing power in machine learning.
Jun 04, 2015
How We Think About Privacy and Finding Features in Black Boxes
00:33:43
In episode eleven we chat with Neil Lawrence from the University of Sheffield. We talk about the problems of privacy in the age of machine learning, the responsibilities that come with using ML tools and making data more open. We learn about the Markov decision process (and what happens when you use it in the real world and it becomes a partially observable Markov decision process) and take a listener question about finding insights into features in the black boxes of deep learning.
May 21, 2015
Interdisciplinary Data and Helping Humans Be Creative
00:34:17
In Episode 10 we talk with David Blei of Columbia University. We talk about his work on latent Dirichlet allocation, topic models, the PhD program in data that he's helping to create at Columbia, and why exploring data is inherently multidisciplinary. We learn about Markov chain Monte Carlo and take a listener question about how machine learning can make humans more creative.
May 07, 2015
Starting Simple and Machine Learning in Meds
00:38:24
In episode nine we talk with George Dahl, of  the University of Toronto, about his work on the Merck molecular activity challenge on kaggle and speech recognition. George recently successfully defended his thesis at the end of March 2015. (Congrats George!) We learn about how networks and graphs can help us understand latent properties of relationships, and we take a listener question about just how you find the right algorithm to solve a problem (Spoiler: start simple.)  
Apr 23, 2015
Spinning Programming Plates and Creative Algorithms
00:35:18
On episode eight we talk with Charles Sutton, a professor in the School of Informatics at the University of Edinburgh, about computer programming and using machine learning to better understand how it's done well. Ryan introduces us to collaborative filtering, a process that helps to make predictions about taste. Netflix and Amazon use it to recommend movies and items, and it's the process that the Netflix Prize competition further helped to hone. Plus, we take a listener question on creativity in algorithms.
Apr 09, 2015
The Automatic Statistician and Electrified Meat
00:45:40
In episode seven of Talking Machines we talk with Zoubin Ghahramani, professor of Information Engineering in the Department of Engineering at the University of Cambridge. His project, The Automatic Statistician, aims to use machine learning to take raw data and give you statistical reports and natural language summaries of the trends that data shows. We get really hungry exploring Bayesian nonparametrics through the stories of the Chinese Restaurant Process and the Indian Buffet Process (but remember, there's no free lunch). Plus we take a listener question about how much we should rely on ourselves and our ideas about what intelligence in electrified meat looks like when we try to build machine intelligences.
Mar 26, 2015
The Future of Machine Learning from the Inside Out
00:28:14
We hear the second part of our conversation with Geoffrey Hinton (Google and University of Toronto), Yoshua Bengio (University of Montreal), and Yann LeCun (Facebook and NYU). They talk with us about the history (and future) of research on neural nets. We explore how to use Determinantal Point Processes. Alex Kulesza and Ben Taskar (who passed away recently) have done some really exciting work in this area; for more on DPPs, check out their paper on the topic. Also, we take a listener question about machine learning and function approximation (spoiler alert: it is, and then again, it isn't).
Mar 13, 2015
The History of Machine Learning from the Inside Out
00:32:36
In episode five of Talking Machines, we hear the first part of our conversation with Geoffrey Hinton (Google and University of Toronto), Yoshua Bengio (University of Montreal), and Yann LeCun (Facebook and NYU). Ryan introduces us to the ideas in tensor factorization methods for learning latent variable models (which is both a tongue twister and one of the new tools in ML). To find out more on the topic, the paper Tensor decompositions for learning latent variable models is a good place to start. You can also take a look at the work of Daniel Hsu, Animashree Anandkumar, and Sham M. Kakade. Plus we take a listener question about just where statistics stops and machine learning begins.
Feb 26, 2015
Using Models in the Wild and Women in Machine Learning
00:45:06
In episode four we talk with Hanna Wallach, of Microsoft Research. She's also a professor in the Department of Computer Science, University of Massachusetts Amherst and one of the founders of Women in Machine Learning (better known as WiML). We take a listener question about scalability and the size of data sets. And Ryan takes us through topic modeling using Latent Dirichlet allocation (say that five times fast). 
Feb 12, 2015
Common Sense Problems and Learning about Machine Learning
00:40:55
On episode three of Talking Machines we sit down with Kevin Murphy, who is currently a research scientist at Google. We talk with him about the work he's doing there on the Knowledge Vault, his textbook Machine Learning: A Probabilistic Perspective (and its arch nemesis, which we won't link to), and how to learn about machine learning (Metacademy is a great place to start). We tackle a listener question about the dream of a one-step solution to strong Artificial Intelligence and whether Deep Neural Networks might be it. Plus, Ryan introduces us to a new way of thinking about questions in machine learning from Yoshua Bengio's lab at the University of Montreal, outlined in their new paper, Identifying and attacking the saddle point problem in high-dimensional non-convex optimization, and Katherine brings up Facebook's release of open source machine learning tools and we talk about what it might mean. If you want to explore some open source tools for machine learning, we also recommend giving these a try: a super big list of ML open source projects, Torch, the Gaussian Process Machine Learning Toolbox, PyMC, Mallet, Stan, Weka, Theano, Caffe, and Spearmint.
Jan 29, 2015
Machine Learning and Magical Thinking
00:35:10
Today on Talking Machines we hear from Google researcher Ilya Sutskever about his work, how he became interested in machine learning, and why it takes a little bit of magical thinking. We take your questions and explore where the line between human programming and computer learning actually is. And we sift through some news from the field: Ryan explains the concepts behind one of the best papers at NIPS this year, A* Sampling, and Katherine brings up an open letter about research priorities and ethical questions that was recently published.
Jan 15, 2015
Hello World!
00:41:28
In the first episode of Talking Machines we meet our hosts, Katherine Gorman (nerd, journalist) and Ryan Adams (nerd, Harvard computer science professor), and explore some of the interviews you'll be able to hear this season. Today we hear some short clips on big issues. We'll get technical, but today is all about introductions. We start with Kevin Murphy of Google talking about his textbook, which has become a standard in the field. Then we turn to Hanna Wallach of Microsoft Research NYC and UMass Amherst and hear about the founding of WiML (Women in Machine Learning). Next we discuss academia's relationship with business with Max Welling from the University of Amsterdam, program co-chair of the 2013 NIPS conference (Neural Information Processing Systems). Finally, we sit down with three pillars of the field, Yann LeCun, Yoshua Bengio, and Geoff Hinton, to hear about where the field has been and where it might be headed.
Jan 01, 2015