The AI Summit in San Francisco

Bart Teeuwen
14 min read · Feb 22, 2020

Seeing the impact of AI for Business

I recently attended the AI Summit in San Francisco, which explored what AI means for enterprises and how to best prepare for an AI-powered future. The event attracted over 4,500 attendees, 250 speakers, and 200 exhibitors showcasing the latest developments in computer vision, business AI, and more.

The conference featured great speakers from tech enterprises and venture capital firms who shared their stories, perspectives on AI trends, insights on how to embed AI in enterprises, how to build an effective AI system foundation, and more.

Google Keynote — Embedding AI in the Enterprise

The Head of Enterprise AI at Google, Rich Dutton, gave a keynote on Google’s blueprint for using AI in the enterprise.

Dutton said that Google has a large research department of about 4,000 people working on research and technology, plus another 1,000 people working on cloud AI, image recognition, and related areas.

He then went into some of the business challenges Google is tackling with AI, including some examples:

Supporting staff

  • Ticket routing: putting tickets into the right bucket or queue by turning their text into numbers for a neural network
  • Automatic ticket resolution through knowledge bases and virtual support agents
  • Real-time ticket trend analysis (emerging clusters to see a problem)
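The ticket-routing idea above boils down to turning ticket text into numbers and matching it against known queues. As a rough illustration (not Google’s actual system), here is a minimal bag-of-words router in Python; the queue names and example tickets are made up:

```python
from collections import Counter
import math

def vectorize(text, vocab):
    """Turn raw ticket text into a bag-of-words count vector."""
    counts = Counter(text.lower().split())
    return [counts.get(word, 0) for word in vocab]

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical labeled examples: queue name -> representative ticket text
examples = {
    "network": "vpn connection drops wifi outage slow network",
    "hardware": "laptop screen broken keyboard replacement battery",
}
vocab = sorted({w for text in examples.values() for w in text.split()})
centroids = {q: vectorize(text, vocab) for q, text in examples.items()}

def route(ticket):
    """Send a ticket to the queue whose example vector is most similar."""
    vec = vectorize(ticket, vocab)
    return max(centroids, key=lambda q: cosine(vec, centroids[q]))

print(route("my vpn connection keeps dropping"))  # network
```

A production system would replace the count vectors with learned embeddings and the nearest-centroid rule with a trained classifier, but the "text to numbers to queue" pipeline is the same shape.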

HR

  • Suggest peer reviewers for performance management
  • Suggest courses, mentors, etc.
  • Suggest new jobs to Google employees
  • Optimize space usage and forecasting
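The HR suggestions above (peer reviewers, courses, mentors) are recommendation problems at heart. A toy sketch that ranks candidates by skill overlap; the names and skills are made up, and the real systems are certainly far more sophisticated:

```python
def suggest(wanted_skills, candidates, top_n=2):
    """Rank candidate mentors/courses by how many desired skills they cover."""
    scored = sorted(candidates.items(),
                    key=lambda kv: len(wanted_skills & kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_n]]

# Made-up data: skills an employee wants to grow, and what each mentor offers
wanted = {"ml", "statistics", "python"}
mentors = {
    "mentor_a": {"ml", "python", "statistics"},
    "mentor_b": {"design", "ux"},
    "mentor_c": {"python", "sql"},
}
print(suggest(wanted, mentors))  # ['mentor_a', 'mentor_c']
```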

Facilities management

  • Device anomaly detection on sensor strips
  • Automate and optimize buildings
  • Cafeteria demand prediction (cut down food waste)
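The device anomaly detection mentioned above can be approximated with something as simple as a z-score over recent sensor readings. A minimal sketch, assuming hypothetical temperature data from a sensor strip (not Google’s actual pipeline):

```python
import statistics

def detect_anomalies(readings, threshold=2.5):
    """Flag indices whose reading is more than `threshold` standard
    deviations away from the mean of the series."""
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / stdev > threshold]

# Hypothetical temperature readings from one sensor strip; index 5 spikes
readings = [21.0, 21.2, 20.9, 21.1, 21.0, 35.0, 21.1, 20.8]
print(detect_anomalies(readings))  # [5]
```

Real deployments would use more robust statistics (e.g. median and MAD, or a learned model), since a single large outlier inflates the mean and standard deviation themselves.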

Communications

  • Video conferencing fault detection, using log files to feed the algorithm
  • Document classification: extracting embedded information to classify documents, done with Optical Character Recognition (OCR) and other technology
  • Document similarity and deduplication (bugs and design docs)
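Document similarity and deduplication (for bugs and design docs) is often done by comparing overlapping word shingles. A small illustrative sketch using Jaccard similarity; the document texts and threshold are made up:

```python
def shingles(text, k=3):
    """Split a document into overlapping k-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets: |intersection| / |union|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def near_duplicates(docs, threshold=0.5):
    """Return index pairs of documents whose shingle similarity exceeds threshold."""
    sets = [shingles(d) for d in docs]
    return [(i, j)
            for i in range(len(docs)) for j in range(i + 1, len(docs))
            if jaccard(sets[i], sets[j]) > threshold]

docs = [
    "the login page crashes when the session expires",
    "the login page crashes when the session ends",
    "add dark mode support to the settings screen",
]
print(near_duplicates(docs))  # [(0, 1)]
```

At scale this pairwise comparison is replaced with techniques like MinHash and locality-sensitive hashing, which avoid comparing every pair of documents.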

According to Dutton, his mission is to use AI to create a competitive advantage within the enterprise by:

1. Educating people

2. Consulting and mentoring people

3. Doing research on AI

4. Implementing best practices

When the moderator asked Dutton about considerations the audience should keep in mind when employing AI, he said privacy, fairness, and interpretability are essential.

Canada Keynote — Canada’s AI Ecosystem

The Chief Scientific Officer & Co-Founder of Stradigi AI, Carolina Bessega, gave a keynote on Canada’s AI ecosystem.

Bessega also represented Canada on stage: she has 20 years of experience in ML/AI, is originally from Venezuela, and is an entrepreneur.

She mentioned that the company works with the Canadian Institute for Advanced Research (CIFAR), which in turn works with three key partners in its Pan-Canadian strategy to deepen investment in Canada’s early leadership in ML research and training. These partners are:

1. Vector Institute for AI

2. Mila (Montreal Institute for Learning Algorithms)

3. Amii (Alberta Machine Intelligence Institute)

Canada’s AI strategy

CIFAR is important to mention as it’s a key part of Canada’s AI strategy, which is centered around four pillars:

Research: CIFAR is investing CAD $125M from the Canadian government over 5 years to increase the number of AI researchers and skilled graduates

Superclusters: the five clusters are Digital Tech, Proteins, Manufacturing, Supply Chain, and Ocean, focused on Vancouver, Calgary/Edmonton, Toronto, Montreal/Ottawa, and Quebec City respectively

Government: positioning the country as an early and responsible adopter of AI, with a focus on transparency, efficiency, and decision making

Ecosystem: facilitating and building an ecosystem of startups, accelerators, investors, and public research

Bessega went one level deeper where she explained exactly what CIFAR’s role is and what their Pan-Can strategy entails.

CIFAR is a Canadian-based international charitable organization that brings together extraordinary people to solve the most pressing questions facing science and humanity. The Canadian government appointed CIFAR in 2017 to develop, lead, and manage a CAD $125M investment in AI over 5 years under the Pan-Canadian AI Strategy, the first national AI strategy in the world.

Pan-Canadian AI Strategy

CIFAR’s goals are to increase the number of people researching AI, interconnect the three major AI centers, support the national research community, and establish global leadership on the economic, legal, policy, and ethical implications of AI.

Building on this strategy, several Canadian provinces have pledged to invest in the AI ecosystem in addition to the federal government of Canada. Ontario (CAD $80M), Alberta (CAD $100M), and Quebec (CAD $100M) together pledged CAD $280M for AI innovation. This has spurred tremendous growth in the AI ecosystem:

  • 650+ startups
  • 40+ accelerators & incubators
  • 60+ investor groups
  • 60+ public research labs

Private investment bodies noticed the activity in AI in Canada, which has led to an increase in private investments as well: several large technology companies have invested in AI research facilities across the country.

All of these investments, accelerators/incubators, and AI startups have led to a shift in reported profit margin between companies that adopted AI strategies early and those who didn’t. Healthcare, financial services, retail, education, and professional services in particular reported 5+ percentage point growth in profit margin.

Finally, Bessega concluded her keynote with five reasons to invest in Canada:

  1. Excellent economic fundamentals
  2. Dynamic workforce
  3. Competitive work environment
  4. Support for innovation
  5. Great quality of life

Synchrony Financial — Build AI as a learning system architected from the ground up

The Senior VP & Entrepreneur in Residence (EIR) at Synchrony Financial, Alex Muller, gave a keynote on how Synchrony Financial thinks about building AI as a learning system by design.

Muller told the audience he thinks the future of development is a self-developing product that works with probability. He emphasized that data will drive better ideas than product managers will.

The moderator asked how Synchrony Financial would build a self-developing product. Muller said we need to stop thinking about the page; the components are what matter. Forcing users to make eleven decisions at once isn’t ideal.

Data reveals what matters most to individual users, and AI and ML can use it to adapt content and components to different users (e.g. page flow can be optimized).
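One common way to let data decide which component or page flow to show is a multi-armed bandit. Below is a minimal epsilon-greedy sketch with hypothetical layout variants and simulated click rates; this is an illustration of the general idea, not Synchrony’s actual approach:

```python
import random

class EpsilonGreedy:
    """Epsilon-greedy bandit: mostly show the best-performing variant,
    occasionally explore the others."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.clicks = {v: 0 for v in variants}
        self.shows = {v: 0 for v in variants}

    def rate(self, v):
        """Observed click-through rate of a variant so far."""
        return self.clicks[v] / self.shows[v] if self.shows[v] else 0.0

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.shows))  # explore
        return max(self.shows, key=self.rate)       # exploit

    def record(self, variant, clicked):
        self.shows[variant] += 1
        self.clicks[variant] += int(clicked)

random.seed(0)
bandit = EpsilonGreedy(["layout_a", "layout_b"])
# Simulated users: layout_b converts at 30%, layout_a at 10% (made-up rates)
for _ in range(2000):
    v = bandit.choose()
    true_rate = 0.3 if v == "layout_b" else 0.1
    bandit.record(v, random.random() < true_rate)
print(bandit.shows)  # inspect how often each layout was shown
```

Over enough traffic the bandit concentrates impressions on the better-converting variant while still sampling the alternative, which is exactly the "self-developing" behavior Muller describes.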

He highlighted three foundational points:

  1. User — Find out everything there is to know about your user and their motivation
  2. Activity — What is the user trying to accomplish?
  3. Context — When, where, and how is the user trying to accomplish it?

This data will create the right foundation to drive predictions, ensure new features are appreciated, and more. The moderator added that achieving those foundational points might require a new kind of workforce and asked Alex for his view on new roles in AI. He spoke broadly about four roles he thinks will be important in the future:

  1. ML engineers
  2. Sentiment analysts
  3. Supervised learning staff
  4. AI Compliance officers

Muller highlighted that it’s important to distribute ML knowledge because teaching context is expensive. He emphasized that it’s okay to use tools instead of building everything yourself. He believes in the concept of a citizen data scientist.

Finally, the moderator asked Muller for some last key takeaways around AI and self-learning systems, to which he responded with three takeaways:

  1. Build data first and with a very open mind
  2. Coach your AI, set boundaries, but allow it to evolve
  3. Develop your AI and ML talent across your organization

Boston Scientific — Amazing Thing, but not everything: AI’s role in Healthcare

The VP of IT & Chief Digital Health Officer at Boston Scientific, David Feygin, gave a keynote on how AI can transform healthcare.

Feygin started his keynote with a bombshell:

“The US spends 17% of GDP on healthcare, and $1 trillion is wasted in medical funds.”

“Patients now expect a seamless healthcare experience.”

He said that today healthcare data is still subjective, sparse, and not cohesive. To curb doctor over-usage and prevent waste in medical funds, Feygin said it’s imperative to connect existing data from multiple sources in automated ways. He said that AI can be very helpful in better utilizing physicians’ time, adding:

“Physicians that don’t use AI will be replaced by those who do”.

As an example of AI that can benefit healthcare, Boston Scientific recently received FDA approval for a solution called “Heartlogic” that predicts heart failure events so physicians can intervene early.

At the end of his presentation Feygin asked the audience what they thought of AI: a threat, a tool, or our savior?

He said it’s all of the above.

KPMG — Enterprise AI adoption and 8 emerging trends

Principal of Data Analytics & AI at KPMG, Traci Gusher, gave a keynote on enterprise AI adoption and 8 emerging trends KPMG sees in the space.

Speed is the key term Gusher gave us. She then showed the audience a graph to showcase exactly how much faster technology adoption has become:

  • 38 years to get 50m radio users
  • 3.5 years to get 50m Facebook users
  • 19 days to get 50m users on Pokémon Go

To create an impact with technology and AI in the 21st century, you can’t really do it on your own anymore. Companies need the right mix of technology and partnerships with universities, technology vendors, professional services vendors, etc. to scale quicker and create impact.

Gusher mentioned KPMG has these partnerships and a wealth of information from its AI research (AI best practices), interviews with 30 senior leaders driving AI strategy, and secondary research on the top 100 companies, which it used to come up with 8 emerging trends surrounding AI:

1. Rapid shift from experimental to applied technology

According to Gusher, KPMG sees three horizons of technology over time, plotted against profit.

  • Horizon 1: extend and defend core businesses
  • Horizon 2: build emerging businesses
  • Horizon 3: create viable options

According to KPMG, only 17% of companies use AI at enterprise scale; far more use it only at departmental scale. Companies plan to triple their AI funding.

Key takeaway: technology is ready to provide true value today according to KPMG, which corresponds to the first horizon.

2. Automation, AI, Analytics and low code platforms are converging

Employing these technologies in tandem is key according to KPMG as they are more effective together. Benefits of this are:

  • Democratization of code and process automation
  • In-house talent that works with multiple technologies
  • Using best of integration tools
  • Leadership buy-in for coordinated multi-technology approach

Key takeaway: Look at automation, AI, analytics and low-code platforms as complementary technologies and services that can be mixed and matched to exponentially improve progress towards specific business goals.

3. Enterprise demand is growing

Enterprise companies need more employees in AI-related roles. The five large enterprises that were interviewed employ 375 AI employees on average and spend about $75M on AI talent. All of them expect their AI talent pool to grow to between 500 and 600 employees within the next three years.

Key takeaway: companies are expecting their AI investments to increase substantially over the next 3 years.

4. New organizational capabilities are critical

By 2022, 54% of employees will need to reskill or upskill because of AI. Enterprises are starting to look at more than just hiring AI talent. According to KPMG, enterprises are trying out several initiatives:

  • Appointing a Chief Innovation Officer to lead overall AI strategy (50%)
  • Having a line-of-business leader take a leading role in AI strategy and deployment (40%)
  • Developing centers of excellence (COEs) (63%) or a CEO-led strategy (30%)

Key takeaway: success is about more than just getting the technology right.

5. Internal governance emerging as key area

The enterprises believe that strong governance around AI drives better outcomes, helps avoid mistakes, and builds trust and credibility for AI projects. KPMG talks about creating an “AI compass”: having the right governance in place helps scale AI solutions enterprise-wide, as doing so requires consistent, purposeful, and responsible action across teams. The KPMG AI governance model is a thoughtful set of structures, capabilities, and processes that involves:

  • Strategy and policy
  • Education
  • Accountability
  • AI procedures and controls
  • Policies applied to data
  • Technology standards

Key takeaway: effective governance is challenging but crucial — companies should approach governance as an enterprise-wide operation.

6. The need to control AI

According to KPMG, only 25 to 30% of enterprises have a robust control framework in place to drive trust and transparency in AI. It’s important to create an end-to-end AI lifecycle with a control and governance framework to prevent the system from producing faulty or incorrect results. There is a big risk attached to continuously learning AI systems that produce bias or whose decisions can’t be traced back to where and how they were made. The cost of this can be tremendous: lost revenue, brand damage, fines from compliance issues, and ethical concerns.

KPMG suggested that AI systems should anchor and manage four pillars of trust: integrity, explainability, fairness, and resilience.

Key takeaway: controlling AI is imperative — this requires digital tooling, frameworks, and monitoring of AI performance, risk, and compliance.

7. Rise of AI-as-a-Service

The rise of companies building their own AI and offering it as a third-party solution has been democratizing access to AI for companies that don’t have the technology and talent in-house.

Although AI is now more accessible, KPMG emphasized that companies need a foundation of internal AI capabilities to effectively use an AI-as-a-service solution. They should also consider to what degree they want to build their own AI (to create a competitive advantage) or buy a service (just send data and get results back).

Key takeaway: companies should design their AI strategies with “as-a-service” models, customization, and flexibility in mind to take advantage of existing technology, depending on company needs.

8. AI could shift the competitive landscape

According to KPMG, companies do believe AI can be of critical use to their business and give them a competitive advantage. However, according to their survey, business leaders are worried about the AI learning curve, the level of investment needed to get there, and the optimal way to divide investments across AI priorities (organizational capital, technology, and data).

The survey KPMG held showed some interesting results on AI implementation:

  • Mature companies spend almost 10x more money on AI capabilities than early-stage companies (enterprise-wide vs. individual business functions)
  • Companies investing in AI report improved productivity and automation as key drivers for using AI (resulting in an average productivity increase of 15%)
  • Most companies only develop AI capabilities for back-office operations instead of the entire organization, including the front office, product innovation, and customer engagement

Key takeaway: make AI part of the overall strategy and view it as a competitive differentiator, not just a way to optimize company processes. It is important to spend the time and money on training both the AI and the people behind it.

Panel — Paving the rocky road of AI: What ethical challenges do we need to be addressing when considering AI in our business?

The VP of Product at Figure Eight, Alyssa Sympson, and the Head of West Coast Operations at BLADE Urban Air Mobility, Shivani Parikha, discussed the ethical challenges of adopting AI in business with moderator Lloyd Danzig, Chairman and Founder of ICED(AI).

Danzig asked the panel what types of ethical challenges we need to address when considering AI in our business. Sympson said that unintended or unwanted bias is a key negative effect. She once accidentally built a system with unintended bias herself, which, according to her, happens when you don’t pay close attention to the specifics of your training data.

As a follow-up on this, the moderator asked how bias in training data keeps coming back.

Sympson explained that in the beginning, the people who built AI systems didn’t have a clear objective. She gave an example where the voice assistant Alexa understood her husband better because Amazon had more male training data. She wants to make sure that datasets are representative of everyone using them.

The moderator then asked Sympson if she has seen ways people mitigate the consequences of bias in datasets. She said it’s important to first declare which markets to serve, define the group of people so the training data reflects that, benchmark the data over time, and retrain the model where needed.
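The benchmarking step Sympson describes can start with something as simple as tracking model accuracy per demographic group and watching the gap between groups over time. A minimal sketch with made-up evaluation records:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute model accuracy per group from (group, prediction, label) rows."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, label in records:
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

# Made-up evaluation log: (group, model prediction, true label)
records = [
    ("male", 1, 1), ("male", 0, 0), ("male", 1, 1), ("male", 1, 0),
    ("female", 1, 0), ("female", 0, 1), ("female", 1, 1), ("female", 0, 0),
]
scores = accuracy_by_group(records)
print(scores)  # {'male': 0.75, 'female': 0.5}
gap = max(scores.values()) - min(scores.values())
print(f"accuracy gap: {gap:.2f}")  # a large gap signals retraining is needed
```

Running this kind of check on every model release is one concrete way to operationalize "benchmark the data over time and retrain where needed."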

The moderator switched to questioning Parikha. He talked about the notion of privacy and how people interact with it: although privacy is a basic human right, people gradually tend to give up more data (privacy) for small conveniences. He asked why the panel thinks people are so willing to give up data.

According to Parikha, it’s all about personal experience: it’s easy to give up data for small conveniences until you have a bad experience. Once affected, people are much less willing to give away their data.

The moderator gave his own view, saying we have to pay attention to this and that it’s more a generational problem, because new generations (e.g. his son) don’t have a problem giving up their data. He thinks the government should play a role in creating a framework that helps companies use AI smartly while preserving customers’ privacy. He then asked the panel what they would say to someone who considers themselves thoughtful and is happy to trade info for free product access.

Parikha would tell that person:

“Realize you are the product. A lot of people don’t realize how much info they give up. Pay attention to what is collected, by whom, and for what purpose.”

Sympson said a regulatory framework is missing in the US. She gave an example:

“Facebook knew she was pregnant before her mother did, while she hadn’t posted anything about it.”

The moderator said it’s a misconception to think AI is only used by benevolent actors. He asked how the panel sees, or fears, the way AI in the hands of bad actors can distort what truth and reality are.

Parikha said it’s a scary thing to think of a society that doesn’t know the difference, and that AI has dual purposes. Sympson added that in her previous role at IBM in vision recognition she was terrified and learning as she went, and received strange requests for uses of the system. Finally, her mentor told her AI is like a knife: make it easy to do good things and hard to do bad things. Build teams that are international and diverse to ensure AI represents the world and the target audience.

Both panelists agreed that a lot of thought needs to go into the ethical implications of developing AI.
