AWS Portfolio


Roy Ben-Alta (Big Data Analytics Practice Lead, AWS)

Thomas Barthelemy (Software Engineer, Coursera)

November 13, 2014 | Las Vegas | NV


[Diagram: AWS big data services by stage]
Collect: AWS Direct Connect, AWS Import/Export, Amazon Kinesis
Store: Amazon S3, Amazon Glacier, Amazon DynamoDB
Process & Analyze: Amazon EMR, Amazon Redshift, Amazon EC2
[Diagram: Traditional ETL – source systems (databases, billing CSV files, CRM database, data feeds) go through Extract, Transform, Load (ETL) into a data warehouse (DWH)]
[Diagram: The same pattern on AWS – Amazon RDS sources, Amazon S3 as landing zone, Amazon EC2 and Amazon EMR for transformation]

AWS Data Pipeline

[Diagram: Amazon S3 as data lake feeding Amazon Redshift and Amazon DynamoDB]
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-program-pipeline.html
[Diagram: logs land in Amazon S3 as data lake / landing zone, Amazon EMR serves as the ETL grid, and Amazon Redshift is the production DWH feeding visualization]
Define
aws datapipeline create-pipeline --name myETL --unique-id token
Output: df-09222142C63VXJU0HC0A
Import
aws datapipeline put-pipeline-definition --pipeline-id df-09222142C63VXJU0HC0A --pipeline-definition /home/repo/etl_reinvent.json
Activate
aws datapipeline activate-pipeline --pipeline-id df-09222142C63VXJU0HC0A
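The definition file passed to put-pipeline-definition (etl_reinvent.json above) is a JSON document listing pipeline objects. The following is only a minimal sketch of what such a file can look like, not Coursera's actual definition: every id, the schedule, the command, the instance type, and the role names are placeholder values.

{
  "objects": [
    {
      "id": "Default",
      "name": "Default",
      "scheduleType": "cron",
      "schedule": { "ref": "DailySchedule" },
      "role": "DataPipelineDefaultRole",
      "resourceRole": "DataPipelineDefaultResourceRole"
    },
    {
      "id": "DailySchedule",
      "name": "DailySchedule",
      "type": "Schedule",
      "period": "1 day",
      "startDateTime": "2014-11-13T08:00:00"
    },
    {
      "id": "EtlStep",
      "name": "EtlStep",
      "type": "ShellCommandActivity",
      "command": "echo 'run one ETL step here'",
      "runsOn": { "ref": "EtlHost" }
    },
    {
      "id": "EtlHost",
      "name": "EtlHost",
      "type": "Ec2Resource",
      "instanceType": "m1.small",
      "terminateAfter": "2 hours"
    }
  ]
}

put-pipeline-definition validates the definition and reports any errors or warnings; activate-pipeline then schedules the runs.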
copy weblogs
from 's3://staging/'
credentials 'aws_access_key_id=<my access key>;aws_secret_access_key=<my secret key>'
delimiter ',';
Coursera Big Data Analytics Powered by AWS
Thomas Barthelemy
SWE, Data Infrastructure
[email protected]
Overview
● About Coursera
● Phase 1: Consolidate data
● Phase 2: Get users hooked
● Phase 3: Increase reliability
● Looking forward
Coursera at a Glance
About Coursera
● Platform for Massive Open Online Courses
● Universities create the content
● Content free to the public
Coursera Stats
● ~10 million learners
● 110+ university partners
● >200 courses open now
● ~170 employees
The value of data at Coursera
● Making strategic and tactical decisions
● Studying pedagogy
Becoming More Data-Driven
● Since the early days, Coursera has understood the value of data
o Founders came from machine learning
o Many of the early employees researched with the founders
● Cost of data access was high
o Each analysis required extraction and pre-processing
o Data was only available to data scientists and engineers
Phase 1:
Consolidate data
Sources
● MySQL
o Site data
o Course data sharded across multiple databases
● Cassandra increasingly used for course data
● Logged event data
● External APIs
[Diagram: source systems – SQL, sharded SQL (classes), NoSQL, logs, external APIs]
(Obligatory meme)
What platforms to use?
● Amazon Redshift had glowing recommendations
● AWS Data Pipeline has native support for various Amazon services
ETL development was slow :(
● Slow to do the following in the console:
o create one pipeline
o create similar pipelines
o update existing pipelines
Solution: Programmatically create pipelines
● Break ETL into reusable steps
o Extract from a variety of sources
o Transform data with Amazon EMR, Amazon EC2, or within Amazon Redshift
o Load data principally into Amazon Redshift
● Use Amazon S3 as intermediate state for Amazon EMR- or Amazon EC2-based transformations
● Map steps to a set of AWS Data Pipeline objects
Example step: extract-rds
● Parameters: hostname, database name, SQL
● Creates pipeline objects (sketched below)
o The S3 node it creates can be used by many other step types
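As an illustration only (not the actual objects Coursera generates), an extract-rds step with the parameters above might expand into three AWS Data Pipeline objects: a MySQL data node for the source query, an S3 data node for the staging location, and a CopyActivity connecting the two. The ids, connection string, credentials, staging path, and the EtlHost resource reference are placeholders.

{
  "objects": [
    {
      "id": "ExtractRdsSource",
      "name": "ExtractRdsSource",
      "type": "MySqlDataNode",
      "connectionString": "jdbc:mysql://maestro-read-replica:3306/maestro",
      "username": "<db user>",
      "*password": "<db password>",
      "table": "courses_instructorincourse",
      "selectQuery": "SELECT instructor_id, course_id, rank FROM courses_instructorincourse;"
    },
    {
      "id": "ExtractRdsStaging",
      "name": "ExtractRdsStaging",
      "type": "S3DataNode",
      "directoryPath": "s3://<staging-bucket>/extract-rds/maestro/"
    },
    {
      "id": "ExtractRdsCopy",
      "name": "ExtractRdsCopy",
      "type": "CopyActivity",
      "input": { "ref": "ExtractRdsSource" },
      "output": { "ref": "ExtractRdsStaging" },
      "runsOn": { "ref": "EtlHost" }
    }
  ]
}

The S3 data node is the reusable piece: any later step that knows how to read from Amazon S3 can take it as its input.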
Example load definition
steps:
- type: extract-from-rds              # Extract data from Amazon RDS
  sql: |
    SELECT instructor_id, course_id, rank
    FROM courses_instructorincourse;
  hostname: maestro-read-replica
  database: maestro
- type: load-into-staging-table       # Load intermediate table in Amazon Redshift
  table: staging.maestro_instructors_sessions
- type: reload-prod-table             # Reload target table with new data
  source: staging.maestro_instructors_sessions
  destination: prod.instructors_sessions
ETL – Amazon RDS
[Diagram: SQL (Amazon RDS) → Extract → Amazon S3 → Load → Amazon Redshift]
ETL – Sharded RDS
[Diagram: sharded SQL databases → Extract → Amazon S3 → Transform on Amazon EMR → Amazon S3 → Load → Amazon Redshift]
ETL – Logs
[Diagram: logs → Extract → Amazon S3 → Transform on Amazon EC2 / Amazon EMR → Amazon S3 → Load → Amazon Redshift]
Reporting Model, Dec 2013
Reporting Model, Sep 2014
AWS Data Pipeline
● Easily handles starting/stopping of resources
● Handles permissions, roles
● Integrates with other AWS services
● Handles “flow” of data, data dependencies
Dealing with large pipelines
● Monolithic pipelines hard to maintain
● Moved to making pipelines smaller
o Hooray modularity!
● If pipeline B depended on pipeline A, just schedule it later
o Add a time buffer just to be safe
Setting cross-pipeline dependencies
● Dependencies are handled by a script that waits until the upstream pipeline has finished (see the sketch below)
o ShellCommandActivity to the rescue
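A minimal sketch of such a wait script, assuming the upstream pipeline writes a success marker to Amazon S3 when it finishes; the bucket, marker path, and timeout are hypothetical, not Coursera's actual convention:

#!/bin/bash
# Runs inside a ShellCommandActivity at the start of the dependent pipeline.
# Blocks until the upstream pipeline's success marker for today exists in S3.
MARKER="s3://<etl-bucket>/markers/upstream_pipeline/$(date +%Y-%m-%d)/_SUCCESS"   # hypothetical path
for attempt in $(seq 1 120); do               # poll for up to ~2 hours
    if aws s3 ls "$MARKER" > /dev/null 2>&1; then
        echo "Upstream pipeline finished; continuing."
        exit 0
    fi
    echo "Waiting for $MARKER (attempt $attempt)..."
    sleep 60
done
echo "Timed out waiting for upstream pipeline." >&2
exit 1

A non-zero exit fails the activity, so the dependent pipeline stops (and retries per its retry settings) instead of loading incomplete data.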
The beauty of ShellCommandActivity
● You can use it anywhere
o Accomplish tasks that have no corresponding activity type
o Override native AWS Data Pipeline support if it does not meet your needs
ETL library
● Installed on the machine as the first step of each pipeline (sketched below)
o With ShellCommandActivity
● Allows for even more modularity
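A sketch of what that bootstrap step could run, assuming the library is published as a tarball in Amazon S3 and installed with pip; the bucket, file name, and package name are assumptions for illustration, not the actual Coursera setup:

#!/bin/bash
# Bootstrap command for the first ShellCommandActivity of each pipeline:
# fetch and install the ETL library so later steps can use it.
set -e
aws s3 cp s3://<etl-bucket>/releases/etl_lib-latest.tar.gz /tmp/etl_lib.tar.gz   # hypothetical location
pip install /tmp/etl_lib.tar.gz                                                  # assumes pip is available on the resource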
Phase 2:
Get users hooked
We have data. Now what?
● Simply collecting data will not make a company data-driven
● First step: make data easier for the analysts to use
Certifying a version of the truth
● Data Infrastructure team creates a 3NF model of the data
● Model is designed to be as interpretable as possible
Example: Forums
● Full-word names for database objects
● Join keys made obvious through universally-unique column names
o e.g. session_id joins to session_id
● Auto-generated ER diagram, so documentation is up-to-date
Some power users need data faster!
● Added new schemas for developers to load data
● Helps us remain data-driven during the early phases of…
o Product release
o Analyzing new data
Lightweight SQL interface
● Access via browser
o Users don’t need database credentials or special software
● Data exportable as CSV
● Auto-complete for DB object names
● Took ~2 days to make
But what about people who don’t know SQL?
● Reporting tools to the rescue!
● In-house dashboard framework to support:
o Support
o Growth
o Data Infrastructure
o Many other teams...
● Tableau for interactive analysis
Phase 3:
Increase reliability
If you build it, they will come
● Built ecosystem of data consumers
o Analysts
o Business users
o Product team
Maintaining the ecosystem
● As we accumulate consumers, need to invest effort in...
o Model stability
o Data quality
o Data availability
Persist Amazon Redshift logs to keep track of usage
● Leverage database credentials to understand usage patterns (example query below)
● Know whom to contact when
o Need to change the schema
o Need to add a new set of tables similar to an existing set (e.g. assessments)
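For example, because every consumer connects with their own database credentials, a query against Amazon Redshift's system views shows who has been running queries (the system tables only retain a few days of history, which is one reason to persist them). A sketch; the time window and output columns are just one reasonable choice:

-- Most active database users over the last 7 days,
-- from Redshift's query log joined to the user catalog.
select u.usename,
       count(*)         as queries_run,
       max(q.starttime) as last_query_at
from svl_qlog q
join pg_user u on u.usesysid = q.userid
where q.starttime > dateadd(day, -7, getdate())
group by u.usename
order by queries_run desc;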
Data Quality
● Quality of source systems (GIGO)
o Encourage source system to fix issues
● Quality of transformations
o Responsibility of the data infrastructure team
Quality Transformations
● Automated QA checks as part of ETL
o Again, use ShellCommandActivity
● Key factors (a sample check is sketched below)
o Counts
o Value comparisons
o Modeling rules (primary keys)
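A sketch of the kind of checks a post-load ShellCommandActivity could run against Amazon Redshift, using the staging and prod tables from the earlier load definition; assuming (instructor_id, course_id) is the declared key is illustrative, not a statement about Coursera's actual QA suite:

-- Count comparison: the freshly loaded staging table and the reloaded
-- prod table should agree (a wrapper script compares the two numbers
-- and exits non-zero if they do not).
select (select count(*) from staging.maestro_instructors_sessions) as staging_rows,
       (select count(*) from prod.instructors_sessions)            as prod_rows;

-- Modeling rule: the declared primary key must actually be unique,
-- since Amazon Redshift does not enforce PRIMARY KEY constraints.
select instructor_id, course_id, count(*)
from prod.instructors_sessions
group by instructor_id, course_id
having count(*) > 1;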
Ensuring High Reliability
● Step retries catch most minor hiccups
● On-call system to recover failed pipelines
● Persist DB logs to keep track of load times
o Delayed loads can also alert on-call devs
o Bonus: users know how fresh the data is
o Bonus: can keep track of success rate
● AWS very helpful in debugging issues
Looking Forward
Current bottlenecks
● Complex ETL
o How do we easily ETL complex, nested structures?
● Developer bottleneck
o How do we allow users to access raw data more quickly?
Amazon S3 as principal data store
● Amazon S3 to store raw data as soon as it’s produced (data lake)
● Amazon EMR to process data
● Amazon Redshift to remain as analytic platform
Thank you!
Want to learn more?
● github.com/coursera/dataduct
● http://tech.coursera.org
[email protected]
http://blogs.aws.amazon.com/bigdata/
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-schedules.html
http://bit.ly/awsevals