
What can machine learning do? Workforce implications.

Article Synopsis:

This article from MIT researchers Erik Brynjolfsson and Tom Mitchell, published in the journal Science, examines the ‘future of work’ in terms of what Machine Learning (ML) actually can – and cannot – do.

The Digital Insurer reviews the Science report "What can machine learning do? Workforce implications".

Specific inputs and outputs – and lots of data – are required to create value with today’s machine learning technologies    

Though the paper contains a compelling policy-level discussion on the future impacts of ML on society, we’ll stick to the more practical points regarding ML implementation in the enterprise.

Though promising, there are limits to ML. To produce a well-defined learning task to which one can apply an ML algorithm, one must fully specify the task, performance metric, and training experience. Obtaining ground-truth training data can be difficult in many domains, such as psychiatric diagnosis, hiring decisions, and legal cases.

Key steps in a successful commercial application typically include efforts to:

  • Identify precisely the function to be learned
  • Collect and cleanse data to render it usable for training the ML algorithm
  • Engineer data features to choose which are likely to be helpful in predicting the target output, and perhaps to collect new data to make up for shortfalls in the original features collected
  • Experiment with different algorithms and parameter settings to optimize the accuracy of learned classifiers
  • Embed the resulting learned system into routine business operations in a way that improves productivity and, if possible, in a way that captures additional training examples on an ongoing basis
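The steps above can be sketched end to end. The snippet below is a minimal illustration only: the loan-default data is made up, and the nearest-centroid classifier stands in for whatever ML algorithm a real project would use.

```python
# Minimal sketch of the workflow above: define the function to be learned,
# prepare training data, fit a simple model, and use it in "operations".
# Data and the nearest-centroid "algorithm" are illustrative, not from the paper.

def mean(xs):
    return sum(xs) / len(xs)

# 1. Function to be learned: (income, debt ratio) -> default (1) / repaid (0)
training = [
    ((30, 0.9), 1), ((25, 0.8), 1), ((28, 0.7), 1),   # defaulted
    ((80, 0.2), 0), ((95, 0.1), 0), ((70, 0.3), 0),   # repaid
]

# 2-3. Collected, cleansed, feature-engineered data: one centroid per class
def fit(examples):
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: tuple(mean(col) for col in zip(*rows))
            for label, rows in by_label.items()}

# 4. The learned classifier: nearest centroid by squared distance
def predict(centroids, features):
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

centroids = fit(training)
# 5. Embedded in routine operations: score a new loan application
print(predict(centroids, (85, 0.15)))  # -> 0 (likely to repay)
```

A production system would replace step 4 with experiments across algorithms and parameter settings, and step 5 would feed each scored application back in as a fresh training example.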

What tasks are suitable for ML and which are not? Eight criteria:

  1. Learning a function that maps well-defined inputs to well-defined outputs: these include classification (e.g., labeling images of dog breeds or labeling medical records according to the likelihood of cancer) and prediction (e.g., analyzing a loan application to predict the likelihood of future default).
  2. Large (digital) data sets exist or can be created containing input-output pairs.
  3. The task provides clear feedback with clearly definable goals and metrics. ML works well when we can clearly describe the goals, even if we cannot necessarily define the best process for achieving those goals.
  4. No long chains of logic or reasoning that depend on diverse background knowledge or common sense. ML systems are very strong at learning empirical associations in data but are less effective when the task requires long chains of reasoning or complex planning that rely on common sense or background knowledge unknown to the computer.
  5. No need for detailed explanation of how the decision was made. Large neural nets learn to make decisions by subtly adjusting up to hundreds of millions of numerical weights that interconnect their artificial neurons. Explaining the reasoning for such decisions to humans can be difficult because machines often do not make use of the same intermediate abstractions that humans do.
  6. A tolerance for error and no need for provably correct or optimal solutions. Nearly all ML algorithms derive their solutions statistically and probabilistically. As a result, it is rarely possible to train them to 100% accuracy.
  7. The phenomenon or function being learned should not change rapidly over time. In general, ML algorithms work well only when the distribution of future test examples is similar to the distribution of training examples.
  8. No specialized dexterity, physical skills, or mobility required. Robots are still quite clumsy compared with humans when dealing with physical manipulation in unstructured environments and tasks.
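Criterion 7 is easy to demonstrate concretely. In this toy sketch (all numbers hypothetical), a decision threshold "learned" from training data keeps working while the world matches the training distribution, then silently degrades when the distribution shifts.

```python
# Illustration of criterion 7: a fixed decision rule learned from training
# data misfires once the underlying distribution shifts. Numbers are made up.

def accuracy(threshold, examples):
    correct = sum((x >= threshold) == label for x, label in examples)
    return correct / len(examples)

# Training-time world: claims of 50 or more are fraudulent (label True)
train = [(x, x >= 50) for x in range(0, 100, 5)]
threshold = 50  # the "learned" decision boundary

# Same distribution at test time: the rule is perfect
print(accuracy(threshold, train))    # -> 1.0

# Shifted world: fraud now starts at 70, so the stale rule over-flags
shifted = [(x, x >= 70) for x in range(0, 100, 5)]
print(accuracy(threshold, shifted))  # -> 0.8
```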

It was once thought all tasks requiring emotional intelligence were beyond the reach of ML systems. But this is changing. Some aspects of sales and customer interaction are potentially a very good fit. For instance, transcripts from large sets of online chats between salespeople and potential customers can be used as training data for a simple chatbot that recognizes which answers to certain common queries are most likely to lead to sales. Companies are also using ML to identify subtle emotions from videos of people.
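The transcript idea reduces to a simple counting scheme. The sketch below is a hypothetical toy (the queries, answers, and sale flags are invented): tally which answer to each common query most often preceded a sale, then reply with the historical winner.

```python
# Toy version of the chatbot idea above: from (query, answer, sale) transcript
# records, pick the answer most often associated with a sale. All data invented.
from collections import defaultdict

transcripts = [
    ("price?", "We offer a 10% first-year discount.", True),
    ("price?", "Pricing is on our website.", False),
    ("price?", "We offer a 10% first-year discount.", True),
    ("coverage?", "Full coverage includes flood damage.", True),
    ("coverage?", "Please call our hotline.", False),
]

# Tally sales per (query, answer) pair
sales = defaultdict(lambda: defaultdict(int))
for query, answer, sold in transcripts:
    sales[query][answer] += int(sold)

def best_answer(query):
    """Reply with the answer historically most likely to close a sale."""
    answers = sales[query]
    return max(answers, key=answers.get)

print(best_answer("price?"))  # -> "We offer a 10% first-year discount."
```

A real system would of course need far more data and a model that generalises to unseen phrasings, but the input-output structure is exactly the well-defined mapping criterion 1 asks for.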

Another area of ML evolution is in tasks involving creativity. In the old computing paradigm, each step of a process needed to be specified in advance with great precision. There was no room for the machine to be “creative” or figure out on its own how to solve a particular problem. But ML systems are specifically designed to allow the machine to figure out solutions on its own (at least for suitable ML tasks).

For example, designing a complex new device has historically been a task where humans are more capable than machines. But generative design software can come up with new designs for objects such as a heat exchanger that meet all the requirements (e.g., weight, strength, and cooling rate) more effectively than anything designed by a human, and with a very different look and feel. Is this “creative”? That depends on what definition one uses. But some “creative” tasks that were previously reserved for humans will be increasingly automatable in the coming years.

This approach works well when the final goal can be well specified and the solutions can be automatically evaluated as clearly right or wrong, or at least better or worse.
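That generate-and-evaluate loop can be sketched in a few lines. The heat-exchanger objective below is entirely hypothetical (real generative design uses physics simulation, not this toy formula); the point is only that once candidates can be scored automatically, search can "figure out" a design.

```python
# Toy generate-and-evaluate loop behind generative design: propose random
# candidate designs, keep the one with the best automatically computed score.
# The scoring formula and parameter ranges are invented for illustration.
import random

random.seed(0)

def score(fin_count, fin_thickness):
    """Reward cooling (more, thinner fins); penalise weight."""
    cooling = fin_count * (1.0 / fin_thickness)
    weight = fin_count * fin_thickness
    return cooling - 0.5 * weight

best = None
for _ in range(1000):  # the machine searches the design space on its own
    candidate = (random.randint(5, 50), random.uniform(0.5, 5.0))
    if best is None or score(*candidate) > score(*best):
        best = candidate

print(best)  # tends toward many thin fins: high cooling, modest weight
```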

At the same time, the role of humans in more clearly defining goals will become more important, suggesting an increased role for scientists, entrepreneurs, and those making a contribution by asking the right questions, even if the machines are often better able to find the solutions to those questions once they are clearly defined.

Link to Full Article: click here

Digital Insurer's Comments

In a nutshell:

  • AI needs masses of data and well-defined inputs and outputs to do well
  • AI struggles with tasks that require a thorough, clear explanation of how each decision was made
  • AI works statistically and probabilistically, so it rarely delivers a provably optimal solution
  • Machines are less adaptable than us, so their tasks can’t change suddenly

The ultimate scope and scale of further advances in ML may rival or exceed that of earlier general-purpose technologies such as the internal combustion engine, electricity, or the Internet. In our view, the time is now to explore specific ML use cases, and invest in downstream skills, resources, and infrastructure, as ML’s potential for value creation in insurance specifically seems massive.

Link to Source: click here
