
The DataRobot Customer Engagement Model

We want to help your business executives learn how to spot as many machine learning opportunities as possible in your business and execute on them quickly and successfully.

Building a Learning Organization

Continuous improvement programs are sprouting up all over as organizations strive to better themselves and gain an edge.

Scholars too have jumped on the bandwagon, beating the drum for “learning organizations” and “knowledge-creating companies.” In rapidly changing businesses like semiconductors and consumer electronics, these ideas are fast taking hold.

Peter Senge, who popularized learning organizations in his book The Fifth Discipline, described them as places “where people continually expand their capacity to create the results they truly desire, where new and expansive patterns of thinking are nurtured, where collective aspiration is set free, and where people are continually learning how to learn together.”1 To achieve these ends, Senge suggested the use of five “component technologies”: systems thinking, personal mastery, mental models, shared vision, and team learning.

In a similar spirit, Ikujiro Nonaka characterized knowledge-creating companies as places where “inventing new knowledge is not a specialized activity…it is a way of behaving, indeed, a way of being, in which everyone is a knowledge worker.”2 Nonaka suggested that companies use metaphors and organizational redundancy to focus thinking, encourage dialogue, and make tacit, instinctively understood ideas explicit.

As a first step, consider the following definition: a learning organization is an organization skilled at creating, acquiring, and transferring knowledge, and at modifying its behavior to reflect new knowledge and insights. This definition begins with a simple truth: new ideas are essential if learning is to take place.

GM, with a few exceptions (like Saturn and NUMMI), has had little success in revamping its manufacturing practices, even though its managers are experts on lean manufacturing, JIT production, and the requirements for improved quality of work life.

Organizations that do pass the definitional test—Honda, Corning, and General Electric come quickly to mind—have, by contrast, become adept at translating new knowledge into new ways of behaving.

Learning organizations are skilled at five main activities: systematic problem solving, experimentation with new approaches, learning from their own experience and past history, learning from the experiences and best practices of others, and transferring knowledge quickly and efficiently throughout the organization.

When a high-level group was formed to review Xerox’s organizational structure and suggest alternatives, it employed the very same process and tools.3

Experimentation with new approaches involves the systematic searching for and testing of new knowledge.

Corning, for example, experiments continually with diverse raw materials and new formulations to increase yields and provide better grades of glass.

Chaparral Steel sends its first-line supervisors on sabbaticals around the globe, where they visit academic and industry leaders, develop an understanding of new work practices and technologies, then bring what they’ve learned back to the company and apply it to daily operations.

Allegheny Ludlum has perfected this juggling act: it keeps expensive, high-impact experiments off the scorecard used to evaluate managers but requires prior approvals from four senior vice presidents.

General Foods’s Topeka plant, one of the first high-commitment work systems in this country, was a pioneering demonstration project initiated to introduce the idea of self-managing teams and high levels of worker autonomy.

At the outset, Diggs assigned a small, multifunctional team the task of designing a “focused factory” dedicated to a narrow, newly developed product line.

The final investment, a total of $30 million, yielded unanticipated breakthroughs in reliability testing, automatic tool adjustment, and programmable control.

“The change was visible at the highest levels, and it went down hard.” Once the first focused factory was running smoothly—it seized 25% of the market in two years and held its edge in reliability for over a decade—Copeland built four more factories in quick succession.

One expert has called this process the “Santayana Review,” citing the famous philosopher George Santayana, who coined the phrase “Those who cannot remember the past are condemned to repeat it.” Unfortunately, too many managers today are indifferent, even hostile, to the past, and by failing to reflect on it, they let valuable knowledge escape.

A study of more than 150 new products concluded that “the knowledge gained from failures [is] often instrumental in achieving subsequent successes… In the simplest terms, failure is the ultimate teacher.”4 IBM’s 360 computer series, for example, one of the most popular and profitable ever built, was based on the technology of the failed Stretch computer that preceded it.

To ensure that the problems were not repeated, senior managers commissioned a high-level employee group, called Project Homework, to compare the development processes of the 737 and 747 with those of the 707 and 727, two of the company’s most profitable planes.

Like Boeing, Xerox studied its product development process, examining three troubled products in an effort to understand why the company’s new business initiatives failed so often.

Senior management invited ADL consultants from around the world to a two-day “jamboree,” featuring booths and presentations documenting a wide range of the company’s most successful practices, publications, and techniques.

British Petroleum went even further and established the post-project appraisal unit to review major investment projects, write up case studies, and derive lessons for planners that were then incorporated into revisions of the company’s planning guidelines.

At the heart of this approach, one expert has observed, “is a mind-set that…enables companies to recognize the value of productive failure as contrasted with unproductive success.”

At Paul Revere Life Insurance, management requires all problem-solving teams to complete short registration forms describing their proposed projects if they hope to qualify for the company’s award program.

The company then enters the forms into its computer system and can immediately retrieve a listing of other groups of people who have worked or are working on the topic, along with a contact person.
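As a rough illustration of the retrieval step, here is a minimal sketch assuming a simple in-memory index keyed by topic; the registry structure, field names, and sample entries are all hypothetical:

```python
# Minimal sketch of a project registry indexed by topic (all names invented).
from collections import defaultdict

registry = defaultdict(list)  # topic -> registrations filed under that topic

def register(topic: str, team: str, contact: str) -> None:
    """File a registration form for a proposed problem-solving project."""
    registry[topic].append({"team": team, "contact": contact})

def find_related(topic: str) -> list:
    """Retrieve other groups working on the topic, with a contact person."""
    return registry[topic]

register("claims processing delays", "Quality Team 12", "J. Smith")
register("claims processing delays", "Quality Team 47", "M. Jones")
print(find_related("claims processing delays"))
```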

According to one expert, “benchmarking is an ongoing investigation and learning experience that ensures that best industry practices are uncovered, analyzed, adopted, and implemented.”7 The greatest benefits come from studying practices, the way that work gets done, rather than results, and from involving line managers in the process.

Benchmarking is a disciplined process that begins with a thorough search to identify best-practice organizations, continues with careful study of one’s own practices and performance, progresses through systematic site visits and interviews, and concludes with an analysis of results, development of recommendations, and implementation.

AT&T’s Benchmarking Group estimates that a moderate-sized project takes four to six months and incurs out-of-pocket costs of $20,000 (when personnel costs are included, the figure is three to four times higher).

Digital Equipment has developed an interactive process called “contextual inquiry” that is used by software engineers to observe users of new technologies as they go about their work.

Companies that approach customers assuming that “we must be right, they have to be wrong” or visit other organizations certain that “they can’t teach us anything” seldom learn very much.

A variety of mechanisms spur this process, including written, oral, and visual reports, site visits and tours, personnel rotation programs, education and training programs, and standardization programs.

In many organizations, expertise is held locally: in a particularly skilled computer technician, perhaps, a savvy global brand manager, or a division head with a track record of successful joint ventures.

A supervisor experienced in just-in-time production, for example, might move to another factory to apply the methods there, or a successful division manager might transfer to a lagging division to invigorate it with already proven ideas.

The CEO of Time Life used the latter approach when he shifted the president of the company’s music division, who had orchestrated several years of rapid growth and high profits through innovative marketing, to the presidency of the book division, where profits were flat because of continued reliance on traditional marketing concepts.

These are most effective when they allow experienced managers to distill what they have learned and diffuse it across the company in the form of new standards, policies, or training programs.

All workers were organized into small, self-managing teams with responsibility for work assignments, scheduling, problem solving and improvement, and peer review.

Drawing on his experiences at Chehalis, he developed a training program geared toward first-level supervisors that taught the behaviors needed to manage employees in a participative, self-managing environment.

As noted earlier, when Xerox introduced problem-solving techniques to its employees in the 1980s, everyone, from the top to the bottom of the organization, was taught in small departmental or divisional groups led by their immediate superior.

At the beginning of the 3-day course, each team received a request from a company officer to prepare a complete quality plan for their unit, based on the course concepts, within 60 days.

Called the Chairman’s Quality Award (CQA), it is an internal quality competition modeled on the Baldrige prize but with an important twist: awards are given not only for absolute performance (using the same 1,000-point scoring system as Baldrige) but also for improvements in scoring from the previous year.

Every year, it identifies every unit within the company that has scored at least 60% of the possible points in each award category and then publicizes the names of these units using written reports and electronic mail.
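A minimal sketch of that publicizing threshold, assuming hypothetical units, categories, and point totals (the actual CQA categories and weights are not given here):

```python
# Hedged sketch: flag units scoring at least 60% of the possible points
# in every award category. Unit names, categories, and scores are invented.
THRESHOLD = 0.60

scores = {
    # unit: {category: (points_earned, points_possible)}
    "Transmission Systems": {"leadership": (72, 100), "results": (310, 450)},
    "Network Services":     {"leadership": (55, 100), "results": (330, 450)},
}

qualifying = [
    unit for unit, categories in scores.items()
    if all(earned >= THRESHOLD * possible
           for earned, possible in categories.values())
]
print(qualifying)  # units whose names get publicized: ['Transmission Systems']
```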

Traditionally, the solution for measuring learning has been “learning curves” and “manufacturing progress functions.” Both concepts date back to the discovery, during the 1920s and 1930s, that the costs of airframe manufacturing fell predictably with increases in cumulative volume.

Later studies expanded the focus, looking at total manufacturing costs and the impact of experience in other industries, including shipbuilding, oil refining, and consumer electronics.

Typically, learning rates were in the 80% to 85% range (meaning that with a doubling of cumulative production, costs fell to 80% to 85% of their previous level), although there was wide variation.
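In formula terms, a constant learning rate r means the cost of the nth cumulative unit is C(n) = C1 * n^(log2 r), so each doubling of volume multiplies cost by r. A minimal sketch with illustrative numbers:

```python
import math

def unit_cost(first_unit_cost: float, cumulative_units: int,
              learning_rate: float) -> float:
    """Classic learning-curve model: cost falls to `learning_rate` of its
    previous level each time cumulative production doubles."""
    exponent = math.log2(learning_rate)   # log2(0.80) is about -0.322
    return first_unit_cost * cumulative_units ** exponent

# With an 80% curve, each doubling of cumulative volume cuts cost by 20%.
for n in (1, 2, 4, 8):
    print(n, round(unit_cost(100.0, n, 0.80), 1))
# prints: 1 100.0, 2 80.0, 4 64.0, 8 51.2
```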

Drawing on the logic of learning curves, strategists at the Boston Consulting Group argued that industries as a whole faced “experience curves,” costs and prices that fell by predictable amounts as industries grew and their total production increased.

To enjoy the benefits of experience, companies would have to rapidly increase their production ahead of competitors to lower prices and gain market share.

Such curves assist in monitoring productivity, determining work flows and staffing levels, and setting prices and profit margins on new airplanes.

They focus on only a single measure of output (cost or price) and ignore learning that affects other competitive variables, like quality, delivery, or new product introductions.

They suggest only one possible learning driver (total production volumes) and ignore both the possibility of learning in mature industries, where output is flat, and the possibility that learning might be driven by other sources, such as new technology or the challenge posed by competing products.

When represented graphically, the performance measure (defect rates, on-time delivery, time to market) is plotted on the vertical axis, using a logarithmic scale, and the time scale (days, months, years) is plotted horizontally.
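As a sketch of such a chart, here is a plot of synthetic monthly defect-rate data with an assumed nine-month improvement half-life (all numbers are invented); constant-rate improvement appears as a straight line on the log scale:

```python
# Hedged sketch of a half-life chart: performance on a log scale vs. time.
import matplotlib.pyplot as plt

HALF_LIFE_MONTHS = 9  # hypothetical: the defect rate halves every 9 months
months = list(range(0, 37, 3))
defect_rate = [1000 * 0.5 ** (m / HALF_LIFE_MONTHS) for m in months]  # ppm

plt.semilogy(months, defect_rate, marker="o")
plt.xlabel("Months")
plt.ylabel("Defect rate (ppm, log scale)")
plt.title("Half-life curve: steady improvement plots as a straight line")
plt.show()
```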

Because of their long gestation periods, half-life curves or any other measures focused solely on results are unlikely to capture any short-run learning that has occurred.

The third step is performance improvement, with changes in behavior leading to measurable improvements in results: superior quality, better delivery, increased market share, or other tangible gains.

At PPG, a team of human resource experts periodically audits every manufacturing plant, including extensive interviews with shop-floor employees, to ensure that the concepts are well understood.

At its 1989 Worldwide Marketing Managers’ Meeting, Ford presented participants with a series of hypothetical situations in which customer complaints were in conflict with short-term dealer or company profit goals and asked how they would respond.

There must be time for reflection and analysis, to think about strategic plans, dissect customer needs, assess current work systems, and invent new products.

Opening up boundaries, with conferences, meetings, and project teams, which either cross organizational levels or link the company and its customers and suppliers, ensures a fresh flow of ideas and the chance to consider competing perspectives.

These are programs or events designed with explicit learning goals in mind, and they can take a variety of forms, including strategic reviews, which examine the changing competitive environment and the company’s product portfolio, technology, and market positioning.

A consumer goods company, for example, might sponsor a study mission to Europe to learn more about distribution methods within the newly unified Common Market, while a high-technology company might launch a systems audit to review its new product development process.

Understanding Customer Experience

Anyone who has signed up recently for cell phone service has faced a stern test in trying to figure out the cost of carry-forward minutes versus free calls within a network and how it compares with the cost of such services as push-to-talk, roaming, and messaging.

So little confidence do consumers have in these electronic surrogates that a few weeks after the Web site www.gethuman.com showed how to reach a live person quickly at ten major consumer sites, instructions for more than 400 additional companies had poured in.

An excess of features, baited rebates, and a paucity of the personal touch are all evidence of indifference to what should be a company’s first concern: the quality of customers’ experiences.

Customer experience encompasses every aspect of a company’s offering—the quality of customer care, of course, but also advertising, packaging, product and service features, ease of use, and reliability.

(See James P. Womack and Daniel T. Jones, “Lean Consumption,” HBR March 2005.) Moreover, in markets that are increasingly global, it is dangerous to assume that a given offering, communication, or other contact will affect faraway consumers the same way it does those at home.

Because a great many customer experiences aren’t the direct consequence of the brand’s messages or the company’s actual offerings, a company’s reexamination of its initiatives and choices will not suffice.

The customers themselves—that is, the full range and unvarnished reality of their prior experiences, and then the expectations, warm or harsh, those have conjured up—must be monitored and probed.

Such attention to customers requires a closed-loop process in which every function worries about delivering a good experience, and senior management ensures that the offering keeps all those parochial conceptions in balance and thus linked to the bottom line.

This article will describe how to create such a process, composed of three kinds of customer monitoring: past patterns, present patterns, and potential patterns.

(These patterns can also be referred to by the frequency with which they are measured: persistent, periodic, and pulsed.) By understanding the different purposes and different owners of these three techniques—and how they work together (not contentiously)—a company can turn pipe dreams of customer focus into a real business system.

Indirect contact most often involves unplanned encounters with representations of a company’s products, services, or brands and takes the form of word-of-mouth recommendations or criticisms, advertising, news reports, reviews, and so forth.

Such an encounter could occur when Google’s whimsical holiday logos pop up on the site’s home page at the inception of a search, or it could be the distinctive “potato, potato” sound of a Harley-Davidson motorcycle’s exhaust system.

Microsoft Windows, which is rich in features, may provide what a corporate IT director considers a positive experience, but many home users prefer Apple’s Macintosh operating system, which offers fewer features and configuration options.

A business’s “experience,” one might say, is its manner of functioning, and a B2B company helps its business customers serve their customers by solving their business problems, just as an effective business-to-consumer company fulfills the personal needs of its customers.

Whether it is a business or a consumer being studied, data about its experiences are collected at “touch points”: instances of direct contact either with the product or service itself or with representations of it by the company or some third party.

In its development of a new AIDS drug, Gilead Sciences provides a good example of how a failure to understand the experience and expectation component of a consumer segment’s dissatisfaction can turn into a failure to reach that segment.

Upon releasing the new medication, which had demonstrated advantages over existing ones, Gilead noticed that while sales to patients new to therapy were robust, sales to patients already undergoing treatment were growing far more slowly than expected.

CEOs may not actively deny the significance of customer experience or, for that matter, the tools used to collect, quantify, and analyze it, but many don’t adequately appreciate what those tools can reveal.

To put it starkly, the difference is that CRM captures what a company knows about a particular customer—his or her history of service requests, product returns, and inquiries, among other things—whereas customer experience data capture customers’ subjective thoughts about a particular company.

Employees accustomed to reading the marketing department’s dry analyses of CRM point-of-sale data easily grasp the distinction upon hearing a frustrated customer’s very words.

(For a detailed account of the difference between the two approaches, see the exhibit “CEM Versus CRM.”) Moreover, many CEOs don’t sufficiently appreciate the distinction between customer satisfaction, which they believe they have heavily documented, and customer experience, which always demands further investigation.

In contrast, executives who rose through finance, engineering, or manufacturing often regard managing customer experience as the responsibility of sales, marketing, or customer service.

Corporate leaders who would never tolerate a large gap between forecasted and actual revenues prefer to look the other way when company and customer assessments diverge, as they do in the Bain survey.

Now companies can instantly combine survey results with data collected from CRM systems and other customer databases, conduct analyses of both individual and aggregate responses in real time, and then automatically route and track issues needing resolution.

That’s why Henry Ford said that if he asked his customers before building his first car how he could better meet their transportation needs, they would have said simply, “Give us faster horses.” Properly understood, the currents beneath the surface that direct the flow of customer experience data will indicate the shape of the next major transformation.

But it is a mistake to assign to customer-facing groups overall accountability for the design, delivery, and creation of a superior customer experience, thereby excusing those more distant from the customer from understanding it.

Dissatisfied with the status quo, customer service vice president Dan Gilbert, showing unusual initiative, distributed the experience data his department had collected to product development, which went to work on the problem.

(For a detailed breakdown of the three patterns, see the exhibit “Tracking Customer Experience: Persistent, Periodic, Pulsed.”) When companies monitor transactions occurring in large numbers and completed by individual customers, they are looking at past patterns.

Enterprise Rent-A-Car is supposed to ask every driver returning one of its vehicles, “Would you rent from Enterprise again?” Any new service a France Telecom customer receives is followed by a brief questionnaire on the quality of his or her experience.

Instead, information on a company’s key products and services should be gathered at scheduled intervals, or “periodically.” Hewlett-Packard and the consulting firm BearingPoint, for example, approach every key customer annually.

By initiating contact with different customers at different times throughout the year, BearingPoint has created an almost persistent data flow that does not depend on the completion of a given transaction, while permitting comparisons among customers on a range of issues.

Like the study Gilead conducted, such probes are outgrowths of strategies usually involving the targeting of particular customer segments and are therefore unscheduled, or “pulsed.” The findings are often used to inform the product development process.

Intuit’s founder, Scott Cook, uses Net Promoter scores for goal setting and engaging the organization’s attention, though he recognizes that a rising or falling score doesn’t begin to reveal what is driving the trend.

A subsequent, more comprehensive survey may show good experience with service response time but low overall ratings, triggering a special study to identify customers’ priorities among a range of service experience factors.

By the same token, corporate sanctions imposed on dealers who receive low scores shouldn’t be so harsh that retailers try to discourage customers from responding by offering to fix any problem on the spot.

After conducting a mini-audit of existing customer-experience programs, responsible parties, and results, it discovered that its vertical-market groups hardly went further than tracking leads and analyzing buying patterns.

Rather than spending a lot of time establishing formal customer experience goals or a detailed plan, the consultants argued for a “fast prototype” relationship survey of top customers.

Of the various questions settled on, two key ones were “How important to your purchasing decision was HiTouch’s brand and the service promise it seemed to make?” and “Do you believe HiTouch delivers the experience promised by its marketing and sales force?” The pilot survey included a summary metric that permitted HiTouch to compare responses by location, service platform, and vertical market.
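A minimal sketch of the kind of comparison such a summary metric permits, assuming hypothetical survey responses (every field and score below is invented):

```python
import pandas as pd

# Hypothetical pilot-survey responses; summary_score stands in for the
# single summary metric used to compare segments.
responses = pd.DataFrame({
    "location":         ["Austin", "Austin", "Chicago", "Chicago"],
    "service_platform": ["hosted", "on-prem", "hosted", "on-prem"],
    "vertical_market":  ["retail", "banking", "retail", "banking"],
    "summary_score":    [8, 6, 7, 4],
})

# Compare mean scores along each dimension, as the pilot survey permitted.
for dim in ("location", "service_platform", "vertical_market"):
    print(responses.groupby(dim)["summary_score"].mean(), "\n")
```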

The sales executive noticed that meetings about the pilot survey, in which salespeople fed customer experience information back to the customers themselves, differed from the typical sales call by shifting the dialogue away from the individual transaction and toward relationship development.

By the following quarter, every vertical-market team, having shown some customers the findings and described what the team planned to do about them, was ready to send out transaction surveys of customers’ experiences with service installation and repair.

At monthly operations meetings, vertical-market general managers reviewed key customer experience issues, and actions taken, before reviewing financials.

The company set up an executive dashboard to keep track of installation experience issues, but the disclosure of high-volume transaction information so upset the managers responsible that they never got around to resolving the underlying issues.

When employees observe senior managers persistently demanding experience information and using it to make tough decisions, their own decisions are conditioned by that awareness.

An adopter of customer experience management, the company had gathered data revealing that customers found a large disparity between actual and expected costs of ownership of Siebel 6, a sales-force automation tool based on a client-server architecture.

Human resources should put together a communications and training strategy that conveys the economic rationale for CEM and paints a picture of how it will alter work and decision-making processes.

Account teams must progress from annual surveys to detailed touch-point analysis, then translate present patterns of customer experience and issues gleaned from recent transactions into action plans that are shared with customers.

Although companies know a lot about customers’ buying habits, incomes, and other characteristics used to classify them, they know little about the thoughts, emotions, and states of mind that customers’ interactions with products, services, and brands induce.

3 Of The Toughest Interview Questions And How To Answer Them

Glassdoor regularly publishes tough questions submitted by users who’ve interviewed at a variety of companies.

The list ranges from the highly technical to the just plain unexpected, from questions attempting to draw on the candidate’s knowledge of algebra and geometry, to those trying to discern how well they work with others.

In order to help prepare you if you’re asked questions like these, we asked recruiters to share the best way to answer some of the most challenging ones.

Jayne Mattson, senior vice president of Keystone Associates, an executive outplacement and career coaching firm, agrees that when you are asked a hypothetical question, it’s best to answer it with a real example.

Then she would say something like: “I take great pride in the quality of my work, so if one of my coworkers prevented me from doing my best, I would use an honest, direct approach with that person.”

All Customer Success Stories

The company ensures that web store visitors can find and buy the products they want easily, regardless of traffic volumes, thanks to a back-end infrastructure running on Amazon EC2 instances with Auto Scaling, an Amazon S3 data repository, and Amazon Kinesis to capture and process web-store clickstreams in real time.
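A minimal sketch of the clickstream-capture step with boto3, assuming a Kinesis stream named web-clickstream already exists (the stream name, region, and event fields are hypothetical; error handling and batching are omitted):

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def record_click(session_id: str, page: str) -> None:
    """Send one web-store click event to the stream for real-time processing."""
    event = {"session_id": session_id, "page": page}
    kinesis.put_record(
        StreamName="web-clickstream",          # hypothetical stream name
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=session_id,               # keeps a session's events ordered
    )

record_click("abc123", "/products/widget-42")
```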

Humans Need Not Apply


Facebook CEO Mark Zuckerberg testifies before Congress on data scandal

Facebook CEO Mark Zuckerberg will testify today before a U.S. congressional hearing about the use of Facebook data to target voters in the 2016 election.

How to write a good essay

How to write an essay: brief essays, and how to use the principles to expand to longer essays or even a thesis. You might also wish to check the video on Interview ...

Transforming Work and the Customer Experience with Dynamic Case Management

Hear three unique perspectives on how to align your business around the customer using Dynamic Case Management (DCM). Matt Richard, CIO at Laborers ...

Harvard Medical School Class Day 2018

Harvard Medical School/Harvard School of Dental Medicine Class Day will take place Thursday, May 24, 2018. On this day of ceremony and celebration, ...

Working Together to Eliminate the Threat of Hepatitis B and C

Viral hepatitis, a group of infectious diseases, affects millions of people worldwide.

Using Brief Interventions to Prevent Teen Dating Violence

In this moderated discussion with researchers, practitioners, and a policy advocate, we talk about the promise of brief interventions to reduce teen dating ...

Eighth Annual Emerging Markets Forum Business Powering Africa Forward

Presented by the Center for Global Business, sponsored in part by CIBER, a Title VI grant from the U.S. Department of Education, at the Robert H. Smith School ...

Economic Ideas Forum

On April 19, 2018, the John H. Schnatter Center for Economic Research at Purdue held the Economic Ideas Forum, which included this fireside chat with Nobel ...

Get More Done: Time Management for Property Managers (Property Management Webinar)

What's the one thing that no property management professional ever has enough of? That's right, time! Watch this video to learn time management best practices ...