MIS Review Case Study
1. Identify and describe the security and control weaknesses discussed in this case.
On September 7, 2017 Equifax reported that from mid-May through July 2017 hackers gained
access to some of its systems and potentially the personal information of about 143 million U.S.
consumers, including Social Security numbers and driver’s license numbers. Credit card
numbers for 209,000 consumers and personal information used in disputes for 182,000 people
were also compromised.
Equifax reported the breach to law enforcement and also hired a cybersecurity firm to
investigate. The size of the breach and the importance and quantity of the personal information
compromised are considered unprecedented.
The Equifax breach was especially damaging because of the amount of sensitive personal and
financial data stored by Equifax that was stolen, and the role such data play in securing
consumers’ bank accounts, medical histories, and access to financing. In one swoop the hackers
gained access to several essential pieces of personal information that could help attackers
commit fraud.
Analyses earlier in 2017 performed by four companies that rank the security status of companies
based on publicly available information showed that Equifax was behind on basic maintenance
of web sites that could have been involved in transmitting sensitive consumer information.
Cyberrisk analysis firm Cyence rated the danger of a data breach at Equifax during the next 12
months at 50 percent. It also found the company performed poorly when compared with other
financial-services companies. The other analyses gave Equifax a higher overall ranking, but the
company fared poorly in overall web-services security, application security, and software
patching.
A security analysis by Fair Isaac Corporation (FICO), a data analytics company focusing on
credit scoring services, found that by July 14 public-facing web sites run by Equifax had expired
certificates, errors in the chain of certificates, or other web-security issues.
Management: Competitors privately observed that Equifax did not upgrade its technological
capabilities to keep pace with its aggressive growth. Equifax appeared to be more focused on
growing data it could commercialize.
The findings of the outside security analyses appear to conflict with public declarations by
Equifax executives that cybersecurity was a top priority. Senior executives had previously said
cybersecurity was one of the fastest-growing areas of expense for the company. Equifax
executives touted Equifax’s focus on security in an investor presentation that took place weeks
after the company had discovered the attack.
Organization: Equifax bought companies with databases housing information about consumers’
employment histories, savings, and salaries, and expanded internationally. The company bought
and sold pieces of data that enabled lenders, landlords, and insurance companies to make
decisions about granting credit, hiring job seekers, and renting an apartment.
The data breach exposed Equifax to legal and financial challenges, although the regulatory
environment is likely to become more lenient under the current presidential administration. It
already is too lenient. Credit reporting bureaus such as Equifax are very lightly regulated. Given
the scale of the data compromised, the punishment for breaches is close to nonexistent.
Technology: The hack involved a known vulnerability in Apache Struts, open-source software
that Equifax and other companies use to build web sites. This software vulnerability was
publicly identified in March 2017, and a patch to fix it was released at that time. That means
Equifax had the information to eliminate this vulnerability two months before the breach
occurred. It did nothing.
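Catching this kind of lapse is largely a matter of routine dependency auditing. The sketch below (Python, purely illustrative and not Equifax's actual tooling) shows how a build check might flag a struts2-core dependency older than the patched release; the pom.xml location, version constants, and regex are assumptions made for the example.

```python
# Minimal sketch: flag an Apache Struts dependency older than the patched
# release for the March 2017 vulnerability (fixed in the 2.3.32 line).
# The pom.xml path and constants are illustrative assumptions, not Equifax's setup.
import re
import sys

PATCHED = (2, 3, 32)  # first patched 2.3.x release

def parse_version(pom_text):
    """Return the struts2-core version declared in a Maven pom.xml, if any."""
    match = re.search(
        r"<artifactId>struts2-core</artifactId>\s*<version>([\d.]+)</version>",
        pom_text,
    )
    return tuple(int(p) for p in match.group(1).split(".")) if match else None

def is_vulnerable(version):
    # Only the 2.3.x line is checked here; a real auditor would also cover 2.5.x.
    return version is not None and version[:2] == (2, 3) and version < PATCHED

if __name__ == "__main__":
    pom = open(sys.argv[1] if len(sys.argv) > 1 else "pom.xml").read()
    version = parse_version(pom)
    if is_vulnerable(version):
        print("WARNING: struts2-core " + ".".join(map(str, version)) + " predates the patch")
        sys.exit(1)
    print("struts2-core dependency looks patched (or is not declared in this file)")
```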
Weaknesses in Equifax security systems were evident well before the big hack. A hacker was
able to access credit-report data between April 2013 and January 2014. The company discovered
that it mistakenly exposed consumer data as a result of a “technical error” that occurred during a
2015 software change. Breaches in 2016 and 2017 compromised information on consumers’ W-2
forms that were stored by Equifax units. Additionally, Equifax disclosed in February 2017 that a
“technical issue” compromised credit information of some consumers who used identity-theft
protection services from LifeLock.
Hackers gained access to Equifax systems containing customer names, Social Security numbers,
birth dates, and addresses. These four pieces of data are generally required for individuals to
apply for various types of consumer credit, including credit cards and personal loans. Criminals
who have access to such data could use it to obtain approval for credit using other people’s
names. Credit specialist and former Equifax manager John Ulzheimer calls this a “nightmare
scenario” because all four critical pieces of information for identity theft are in one place.
Stolen personal data will be available to hackers on the Dark Web for years to come.
Governments involved in state-sponsored cyberwarfare are able to use the data to populate
databases of detailed personal and medical information that can be used for blackmail or future
attacks.
4 How can future data breaches like this one be prevented? Explain your answer.
There will be hacks—and afterward, there will be more. Companies need to be even more
diligent about incorporating security into every aspect of their IT infrastructure and systems
development activities. To prevent data breaches such as Equifax’s, organizations need many
layers of security controls. They need to assume that prevention methods are going to fail.
As data breaches rise in significance and frequency, the government is proposing new legislation
that would require firms to report data breaches within specific time frames and set standards for
data security.
There are other measures every organization, public and private, can and should take to secure
its systems and information. Section 8.4 of this chapter, “What Are the Most Important Tools and
Technologies for Safeguarding Information Resources?”, provides a list of these tools and technologies.
Many security experts believe that U.S. cybersecurity is not well-organized. The FBI and
Department of Homeland Security released a “cyber alert” memo describing lessons learned
from other hacks. The memo lists generally recommended security practices for companies to
adopt, including encrypting data, activating a personal firewall at agency workstations,
monitoring users’ online habits, and blocking potentially malicious sites.
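To make the memo's "encrypting data" recommendation concrete, here is a minimal sketch using Python's third-party cryptography package; the field value is hypothetical, and a real deployment would manage keys in a key management service rather than generating one inline.

```python
# Minimal sketch of encrypting sensitive data at rest, one of the practices
# recommended in the FBI/DHS memo. Uses the third-party "cryptography" package
# (pip install cryptography). The value is hypothetical; production systems
# would load the key from a hardware security module or key management service.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production, load from a KMS; never hard-code
cipher = Fernet(key)

ssn_plaintext = b"078-05-1120"                  # illustrative Social Security number
ssn_encrypted = cipher.encrypt(ssn_plaintext)   # store this value, not the plaintext

# Later, an authorized service decrypts only when the value is actually needed.
assert cipher.decrypt(ssn_encrypted) == ssn_plaintext
print("stored ciphertext prefix:", ssn_encrypted[:16])
```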
Case Study: Clemens Food Group Delivers with New Enterprise Applications
1 Why would supply chain management be so important for Clemens Food Group?
Clemens Food Group is a vertically coordinated company that includes antibiotic-free hog farming,
food production, logistical services, and transportation. Using a responsive pork production
system, the company focuses on supplying the highest-quality products to its partners as well as
advanced solutions that simplify partners’ operations.
The Clemens Food Group raises and processes about five million hogs per year, managing
procurement, production, and logistics services from birth to finished food products. Clemens
has 3,350 employees.
For a company in the perishable goods industry such as Clemens Food to be profitable, it must
have a firm grasp on the timeliness and accuracy of orders and very precise information about
the status of its products and warehouse activities throughout its network of farms and
production facilities. Accuracy in determining yields, costs, and prices in a wildly fluctuating
market can make a difference of millions of dollars.
2 What problems was the company facing? What management, organization, and technology
factors contributed to these problems?
Management: Clemens Food’s legacy systems were no longer able to keep up with production
and support future growth. Management realized the company needed a new platform to provide
better visibility into production, more efficient planning, and tighter control of available-to-
promise processes.
Being in the perishables industry made it imperative for Clemens Food to have master data in
place when the new system went live to avoid disruptions to production or shipping capabilities.
Organization: Clemens Food also wanted real-time information about plant profitability,
including daily profitability margins on an order-by-order basis.
Sales forecasting in the meat-processing industry has unique challenges because of the many
variables from dealing with perishable products, raw material by-products, and seasonality
considerations. Every Thursday, Clemens Food ran a sales report on its old legacy system that
showed the previous week’s sales. Information about actual profitability was delayed.
Now, the company can measure profitability on an invoice-by-invoice basis, and it knows the
profitability of each order right away. Prices change daily in the perishable food business, so the
importance of having real-time information about profitability can’t be overstated.
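A minimal sketch of what invoice-by-invoice margin reporting amounts to in code (illustrative figures and column names, not Clemens Food's actual SAP data model):

```python
# Minimal sketch of invoice-by-invoice profitability reporting. Data and column
# names are illustrative; Clemens Food's actual figures live in SAP S/4HANA.
import pandas as pd

invoices = pd.DataFrame(
    {
        "invoice_id": ["INV-1001", "INV-1002", "INV-1003"],
        "revenue": [12800.00, 9650.00, 21400.00],   # selling price per invoice
        "cost": [11050.00, 9900.00, 18700.00],      # raw material + processing cost
    }
)

invoices["margin"] = invoices["revenue"] - invoices["cost"]
invoices["margin_pct"] = 100 * invoices["margin"] / invoices["revenue"]

# With daily price swings in perishables, a negative margin shows up the same
# day the order ships, not in a report the following Thursday.
print(invoices[["invoice_id", "margin", "margin_pct"]].round(2))
```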
Technology: Clemens Food created a five-year plan to modernize its IT infrastructure with an
integrated platform for systems to optimize its supply network and improve scheduling,
optimization, and margin visibility in its multi-business operations. The plan gained steam in
2014 when Clemens Food announced it would develop a third pork processing plant comprising
550,000 square feet in Coldwater Township, Michigan.
The addition of this facility could significantly increase volume and double revenue if it was
backed by a more modern IT platform. Clemens Food’s existing ERP system needed to be
replaced by one that could handle increased volume and multi-plant complexities.
3 Was SAP S/4HANA a good solution for Clemens Food Group? Explain your answer.
Yes, SAP S/4HANA was an excellent solution for Clemens Food Group.
SAP S/4HANA is a business suite that is based on the SAP HANA in-memory computing
platform. It features enterprise resource planning software meant to cover all day-to-day
processes of an enterprise and also integrates portions of SAP Business Suite products for
customer relationship management, supplier relationship management, and supply chain
management. SAP S/4HANA is available in on-premises, cloud, and hybrid computing
platforms.
The company now has a single “source of truth,” and data are integrated, whereas in the past it
had to deal with similar data spread over multiple systems. With a single source of truth and the
ability to put information at people’s fingertips, Clemens Food can create dashboards and focus
on making reporting far simpler than it’s ever been.
Organization: Clemens Food selected itelligence Group implementation consultants to help with
its master data and other migration issues. itelligence Group is a global SAP Platinum Partner
with over 25 years of experience. It offers a full range of services from implementation
consulting to managed services for its clients. Clemens identified itelligence as a partner with
deep SAP food-specific knowledge and experience, including fresh and processed meat.
itelligence Group had a proprietary Hog Procurement solution available for Clemens that helped
deliver an on-time and on-budget project with minimal disruption to the business. itelligence
Group had experience guiding other meat-processing companies through similar large-scale
implementations. Clemens Food wanted itelligence to act as business process experts to help it
re-examine the way it did things. Clemens Food followed itelligence’s
suggestions about modifications, budget management, the overall testing cycle, and the
philosophy of implementation.
Technology: By the time Clemens Food migrated to SAP S/4HANA, its legacy ERP system was
linked to more than 70 applications. One especially valuable piece of project guidance from
itelligence was to encourage project members to see the implementation as being led by the
business rather than just an IT project. Clemens Food started out with the project being IT-led,
but after five months assigned internal leaders of the business to be the project leads. That switch
forced the project team to be more objective through all the different testing phases. After each
testing cycle, they had objective scoring from the dedicated team leads who viewed the project as
a business process improvement. That helped the project team move closer to a finished product,
rather than waiting until going live to find out it missed the mark. Including the business as equal
partners when updates were instituted helped ensure that customizations were avoided.
1 How did social media support Nasty Gal’s business model? To what extent was Nasty Gal
a “social” business?
Nasty Gal’s styling was edgy and fresh—a little bit rock and roll, a little bit disco, modern, but
never hyper-trendy. Eight years after its founding, Nasty Gal had sold more than $100 million in
new and vintage clothing and accessories, employed more than 350 people, had more than a
million fans on Facebook and Instagram, and was a global brand. It looked like a genuine e-
commerce success story.
Sophia Amoruso, who launched Nasty Gal, was a heavy user of social tools to promote her
business. When she first started out, she used MySpace, where she attracted a cult following of
more than 60,000 fans. The company gained traction on social media with Nasty Gal’s aesthetic
that could be both high and low, edgy and glossy.
Amoruso took customer feedback very seriously and believed customers were at the center of
everything Nasty Gal did. When she sold on eBay, she learned to respond to every customer
comment to help her understand precisely who was buying her goods and what they wanted.
Amoruso said that the content Nasty Gal customers created was always a huge part of the Nasty
Gal brand. It was very important to see how customers wore Nasty Gal’s pieces and the types of
photographs they took. They were inspiring.
Social media is built on sharing, and Nasty Gal gave its followers compelling images, words, and
content to share and talk about each day. It could be a crazy vintage piece, a quote, or a behind-
the-scenes photo. At most companies, the person manning the Twitter and Facebook accounts is
far removed from senior management. Amoruso did not always author every Nasty Gal tweet,
but she still read every comment. If the customers were unhappy about something, she wanted to
hear about it right away. At other businesses, it might take months for customer feedback to filter
up to the CEO. When Nasty Gal first joined Snapchat, Amoruso herself tested the water with a
few Snaps, and Nasty Gal followers responded in force.
2 What management, organization, and technology problems were responsible for Nasty
Gal’s failure as a business?
Management: In June 2008, Amoruso moved Nasty Gal Vintage off eBay and onto its own
destination web site, www.nastygal.com. In 2012, Nasty Gal began selling clothes under its own
brand label and also invested $18 million in a 527,000-square-foot national distribution center in
Kentucky to handle its own shipping and logistics. Venture capitalists Index Ventures provided
at least $40 million in funding. Nasty Gal opened a brick-and-mortar store in Los Angeles in
2014 and another in Santa Monica in 2015.
Nasty Gal experienced tremendous growth in its early years, being named Inc. Magazine’s
fastest-growing retailer in 2012 and earning the number one ranking in Internet Retailer’s Top 500
Guide in 2016. By 2011, annual sales had hit $24 million, climbing to nearly $100 million in 2012.
However, sales then started dropping, to $85 million in 2014 and $77 million in 2015.
Nasty Gal wasted money on things that didn’t warrant large expenditures. The company
quintupled the size of its headquarters by moving into a 50,300-square-foot location in
downtown Los Angeles in 2013—far more space than the company needed, according to
industry experts. The company also opened a 500,000-square-foot fulfillment center in Kentucky
to handle its own distribution and logistics as well as two brick-and-mortar stores in Los Angeles
and Santa Monica. Even in the hyper-trendy fashion business, companies have to closely monitor
production, distribution, and expenses for operations to move products at a scale big enough to
make a profit. Nasty Gal’s mostly young staff focused too much on the creative side of the
business.
While it was growing, Nasty Gal built its management team by hiring talent from retail outlets
such as Urban Outfitters, but their traditional retail backgrounds clashed with the
startup mentality. As Nasty Gal expanded, Amoruso’s own fame also grew, and she was
sidetracked by other projects. Employees complained about Amoruso’s management style and
lack of focus.
Amoruso resigned as chief executive in 2015 but remained on Nasty Gal’s board of directors
until the company filed for Chapter 11 bankruptcy in November 2016. Between 2015 and 2016,
Nasty Gal raised an additional $24 million in equity and debt financing from venture-focused
Stamos Capital Partners LP and Hercules Technology Growth Capital Inc. Even though the
funding helped Nasty Gal stay afloat, the company still had trouble paying for new inventory,
rent, and other operating expenses.
1 What are the management, organizational, and technology challenges posed by self-
driving car technology?
Management: Autonomous vehicle technology has reached a point where no automaker can
ignore it. Every major auto maker is racing to develop and perfect autonomous vehicles,
believing that the market for them could one day reach trillions of dollars.
Organization: There’s still plenty that needs to be improved before self-driving vehicles
can safely take to the road. Autonomous vehicles are not yet able to operate safely in all
weather conditions. Heavy rain or snow can confuse current automotive radar and lidar systems,
so autonomous vehicles cannot operate on their own in such conditions. These vehicles also
have trouble when tree branches hang too low or bridges and roads have faint lane markings. On
some roads, self-driving vehicles will have to make guidance decisions without the benefit of
white lines or clear demarcations at the edge of the road, including Botts’ Dots (small plastic
markers that define lanes). Botts’ Dots are not believed to be effective lane markings for
autonomous vehicles.
Technology: A car that is supposed to take over driving from a human requires a very powerful
computer system that must process and analyze large amounts of data generated by myriad
sensors, cameras, and other devices to control and adjust steering, accelerating, and braking in
response to real-time conditions. Key technologies include sensors, cameras, lidar, radar, GPS,
onboard computers, machine learning, deep learning, computer vision, and maps.
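As a toy illustration of that processing loop (not any automaker's control software), the sketch below fuses two range estimates and decides whether to brake; all speeds, distances, and thresholds are assumptions made for the example.

```python
# Toy sketch of a single perception-and-control step in an autonomous vehicle.
# Real systems fuse dozens of sensors with Kalman filters and neural networks;
# here lidar and radar readings are simply averaged and compared to a stopping
# distance. All numbers and thresholds are illustrative assumptions.

def fused_distance_m(lidar_m: float, radar_m: float) -> float:
    """Crude sensor fusion: average two independent range estimates."""
    return (lidar_m + radar_m) / 2.0

def braking_command(speed_mps: float, obstacle_m: float, decel_mps2: float = 6.0) -> str:
    """Brake if the obstacle is within the distance needed to stop (plus a margin)."""
    stopping_distance = speed_mps ** 2 / (2 * decel_mps2)   # basic kinematics
    return "BRAKE" if obstacle_m <= stopping_distance * 1.5 else "CRUISE"

if __name__ == "__main__":
    distance = fused_distance_m(lidar_m=38.0, radar_m=41.0)     # meters to obstacle
    print(braking_command(speed_mps=25.0, obstacle_m=distance)) # 25 m/s is roughly 90 km/h
```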
Self-driving car companies are notorious for overhyping their progress. Should we believe them?
At this point, the outlook for them is clouded.
Self-driving cars require new ecosystems to support them, much as today’s cars are dependent on
garages, gasoline stations, and highway systems. New roads, highways, and automotive supply
chains will have to be rebuilt for self-driving cars. The big auto makers that build millions of cars
a year rely on complex, precise interaction among hundreds of companies, including automotive
component suppliers and the services to keep cars running. They need dealers to sell the cars, gas
pumps or charging stations to fuel them, body shops to fix them, and parking lots to store them.
Manufacturers of autonomous vehicles need to rethink interactions and processes built up over a
century. The highway infrastructure will need to change over time to support autonomous
vehicles. Waymo has partnered with Avis to take care of its fleet of driverless minivans in
Arizona, and it’s working with a startup called Trov to insure their passengers. GM is retooling
one of its plants to produce Chevrolet Bolts without steering wheels or pedals.
3 What ethical and social issues are raised by self-driving car technology?
In March 2018, a self-driving Uber Volvo XC90 operating in autonomous mode struck and
killed a woman in Tempe, Arizona. Since the crash, Arizona has suspended autonomous vehicle
testing in the state, and Uber is not renewing its permit to test self-driving cars in California. The
company has also stopped testing autonomous cars in Pittsburgh and Toronto, and it is unclear
when that testing will resume.
Even before the accident, Uber’s self-driving cars were having trouble driving through
construction zones and next to tall vehicles like big truck rigs. Uber’s drivers had to intervene far
more frequently than drivers in other autonomous car projects. The Uber accident raised
questions about whether autonomous vehicles were even ready to be tested on public roads and
how regulators should deal with this.
While proponents of self-driving cars like Tesla’s Elon Musk envision a self-driving world
where almost all traffic accidents would be eliminated, and the elderly and disabled could travel
freely, most Americans think otherwise. A Pew Research Center survey found that most people
did not want to ride in self-driving cars and were unsure if they would make roads more
dangerous or safer. Eighty-seven percent wanted a person always behind the wheel, ready to take
over if something went wrong.
Some pundits predict that in the next few decades, driverless technology will add $7 trillion to
the global economy and save hundreds of thousands of lives. At the same time, it could devastate
the auto industry along with gas stations, taxi drivers, and truckers. People might stop buying
cars because services like Uber using self-driving cars would be cheaper.
This could cause mass unemployment of taxi drivers and large reductions in auto sales. It would
also cut down the need for many parking garages and parking spaces, freeing up valuable real
estate for other purposes. More people might decide to live further from their workplaces
because autonomous vehicles linked to traffic systems would make traffic flow more smoothly
and free riders to work, nap, or watch video while commuting.
Some people will prosper. Most will probably benefit, but many will be left behind. Driverless
technology is estimated to affect one in every nine U.S. jobs, although it will also create new
jobs. Another consideration is that the tremendous investment in autonomous vehicles, estimated
to be around $32 billion annually, might be better spent on improving public transportation
systems like trains and subways. Does America need more cars in sprawling urban areas where
highways are already jammed?
4 Will cars really be able to drive themselves without human operators? Should they?
How can autonomous vehicles communicate with humans and other machines to let them know
what they want to do? Researchers are investigating whether electronic signs and car-to-car
communication systems would solve this problem. There’s also what’s called the “trolley
problem”: In a situation where a crash is unavoidable, how does a robot car decide whom or
what to hit? Should it hit the car coming up on its left or a tree on the side of the road?
A computer-driven car that can handle any situation as well as a human under all conditions is
decades away at best.
1 How is GE changing its business strategy and business model? What is the role of
information technology in GE’s business?
The company is transitioning to a much more technology-centric business strategy and business
model. GE is selling off its division that makes refrigerators and microwave ovens along with
most of GE Capital financial services to focus on electric power generators, jet engines,
locomotives, and oil-refining gear and software to connect these devices to the cloud. GE is
putting its money on the technology that controls and monitors industrial machines as well as
software-powered, cloud-based services for analyzing and deriving value from the data. GE
hopes this strategy will turn it into a major software company.
3 Describe three kinds of decisions that can be supported using Predix. What is the value to
the firm of each of those decisions? Explain.
The foundation for all of GE’s Industrial Internet of Things (IIoT) applications is Predix, a software
platform launched in 2015 to collect data from industrial sensors and analyze the information in
the cloud. Predix can run on any cloud infrastructure. The platform has open standards and
protocols that allow customers to more easily and quickly connect their machines to the
Industrial Internet. The platform can accommodate the size and scale of industrial data for every
customer at current levels of use, but it also has been designed to scale up as demand grows.
Predix can offer apps developed by other companies as well as GE, is available for on-premises
or cloud-based deployment, and can be extended by customers with their own data sources,
algorithms, and code. Customers may develop their own custom applications for the Predix
platform. GE is also building a developer community to create apps that can be hosted on Predix.
Predix is not limited to industrial applications. It could be used for analyzing data in healthcare
systems, for example. GE now has a Health Cloud running on Predix. Data security is embedded
at all platform application layers, and this is essential for companies linking their operations to
the Internet.
GE currently uses Predix to monitor and maintain its own industrial products, such as wind
turbines, jet engines, and hydroelectric turbine systems. Predix is able to provide GE corporate
customers’ machine operators and maintenance engineers with real-time information to schedule
maintenance checks, improve machine efficiency, and reduce downtime. Helping customers
collect and use this operational data proactively would lower costs in GE service agreements.
When GE agrees to provide service for a customer’s machine, it often comes with a performance
guarantee. Proactive identification of potential issues also takes cost out of shop visits, which
helps both the customer and GE.
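The core of that proactive servicing is, in principle, simple: compare live sensor readings against expected operating ranges and schedule a shop visit before a failure occurs. A minimal sketch in generic Python follows (this is not the Predix API; asset names and limits are illustrative assumptions).

```python
# Minimal sketch of threshold-based proactive maintenance, the kind of decision
# Predix supports. Generic Python, not the Predix API; readings and limits are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TurbineReading:
    asset_id: str
    bearing_temp_c: float
    vibration_mm_s: float

LIMITS = {"bearing_temp_c": 95.0, "vibration_mm_s": 7.1}  # example operating limits

def maintenance_flags(readings):
    """Yield assets whose readings exceed an operating limit."""
    for r in readings:
        breaches = [name for name, limit in LIMITS.items() if getattr(r, name) > limit]
        if breaches:
            yield r.asset_id, breaches

stream = [
    TurbineReading("WT-0107", bearing_temp_c=88.2, vibration_mm_s=4.9),
    TurbineReading("WT-0212", bearing_temp_c=97.5, vibration_mm_s=7.8),
]
for asset, breaches in maintenance_flags(stream):
    print(f"Schedule maintenance for {asset}: {', '.join(breaches)} out of range")
```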
In early 2013, GE began to use Predix to analyze data across its fleet of machines. By identifying
what made one machine more efficient or downtime-prone than another, GE could more tightly
manage its operations. For example, by using high performance analytics, GE learned that some
of its jet aircraft engines were beginning to require more frequent unscheduled maintenance. A
single engine’s operating data will only tell you there’s a problem with that engine. But by
collecting massive amounts of data and analyzing the data across its entire fleet of machines, GE
was able to cluster engine data by operating environment. The company found that the hot and
harsh environments in the Middle East and China caused engines to clog, heat up, and lose
efficiency, so they required more maintenance. GE found that engines had far fewer of these
problems if they were washed more frequently. Fleet analytics helped GE increase engine
lifetime and reduce engine maintenance.
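A minimal sketch of the fleet-level comparison that surfaces such a pattern (illustrative data and a simple pandas groupby, not GE's actual analytics):

```python
# Minimal sketch of fleet analytics: compare unscheduled-maintenance rates by
# operating environment across a fleet of engines. Data are illustrative, not GE's.
import pandas as pd

fleet = pd.DataFrame(
    {
        "engine_id": ["E1", "E2", "E3", "E4", "E5", "E6"],
        "environment": ["hot_harsh", "hot_harsh", "temperate",
                        "temperate", "hot_harsh", "temperate"],
        "flight_hours": [4200, 3900, 4100, 4350, 4000, 3950],
        "unscheduled_events": [9, 11, 2, 3, 10, 2],
    }
)

fleet["events_per_1k_hours"] = 1000 * fleet["unscheduled_events"] / fleet["flight_hours"]

# Grouping the whole fleet by environment makes visible a pattern that a
# single engine's history cannot show.
print(fleet.groupby("environment")["events_per_1k_hours"].mean().round(2))
```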
GE wanted to go beyond helping its customers manage the performance of their GE machines to
managing the data on all of the machines in their entire operations, even those of other
manufacturers. Many customers use GE equipment alongside equipment from competitors. The
customer cares about running the whole plant, not just GE turbines, for example, and 80 percent
of the equipment in these facilities is not from GE. If, for example, an oil and gas customer has a
problem with a turbo compressor, a heat exchanger upstream from that compressor may be the
source of the problem, so analyzing data from the turbo compressor will only tell part of the
story. Customers therefore want GE to analyze non-GE equipment and help them keep their
entire plant running. GE is in discussions with some customers about managing sensor data from
all of the machine assets in their operation.
In November 2017, John Flannery, who succeeded Jeff Immelt as GE’s CEO, announced that
spending on GE Digital and Predix would be cut by more than 25 percent, or $400 million.
Digital initiatives nevertheless remained critical to the company, and Flannery still wanted Predix
to generate $1 billion in annual revenue, but via a “more focused” strategy. In July 2018, the
company announced it was seeking a buyer for key parts of
its digital unit.
These events demonstrate that GE greatly underestimated the challenges of creating all the
software needed for analyzing Internet of Things (IoT) data to improve business processes across
a wide range of industries. GE’s technical expertise lies in designing and manufacturing
machines like jet engines, power plant turbines, and medical imaging equipment and in creating
the specialized software to control machines in factory operations. It was too much of a stretch
for GE Digital to move quickly into cloud-based software to handle all kinds of sensor and
machine data and big data analytics for the entire Industrial Internet. GE also faced difficulties
adapting its own legacy applications to Predix. GE had many algorithms for monitoring its
machines, but they were written in different coding languages and resided on other systems in
GE businesses. This made converting the software to run on Predix time consuming and
expensive. Predix has been pared back to be primarily a set of software tools to help write
applications, as opposed to connecting to layers of code to automate data analysis. GE Digital
now focuses on selling products for specific industrial applications tailored to GE’s existing
industrial customers rather than an all-purpose operating system and platform for the wider
industrial world.