
Pramit Maharjan

PROFESSIONAL SUMMARY
• 5+ years of experience in AI, Machine Learning and Deep Learning.
• Experience with large structured and unstructured datasets, data visualization, data acquisition and modelling.
• Proficient in predictive modeling, NLP, deep learning, inferential statistics, graph analysis and data validation.
• Hands-on experience with data mining algorithms and text analytics approaches.
• Experience developing statistical machine learning, text analytics and data mining solutions for a variety of business problems, and generating data visualizations.
• Worked with pandas, NumPy, scikit-learn, NLTK, TensorFlow and Keras.
• Experience in statistics and regression (linear, logistic, Poisson, binomial) and neural networks.
• Proficient in machine learning techniques (Decision Trees, Linear and Logistic Regression, Random Forest, Naïve Bayes, SVM, Bayesian methods, XGBoost, K-Nearest Neighbors) and statistical modeling in forecasting/predictive analytics, segmentation methodologies, regression-based models, hypothesis testing, factor analysis/PCA and ensembles (a minimal workflow sketch follows this list).
• Captured trends and seasonality patterns through time series models such as ARIMA; used lag variables and sliding-window techniques and iteratively engineered features.
• Highly skilled in using visualization tools such as Matplotlib, Seaborn and Power BI for creating dashboards.
• Strong object-oriented programming (OOP) expertise in Python and Java, and strong database/SQL skills.
• Worked with and extracted data from various database sources such as MySQL, SQL Server, PostgreSQL and NoSQL stores (HBase, MongoDB), as well as data lakes.
• Worked on a Big Data platform using Sqoop, Flume, Hive, MapReduce, Spark Streaming, PySpark, MLlib and Spark SQL.
• Experience with file systems, databases, and data movement (ETL).
• Expertise in UNIX and Linux Shell environments using command line utilities.
• Good experience with cloud platforms such as AWS, Azure.
• Proficient in Python, with experience building and productionizing end-to-end systems.
• Knowledge of Django, REST, Spring Boot and other web development tools.
• Used GitHub, Bitbucket and Jira as integration tools.
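The following minimal sketch illustrates the kind of scikit-learn workflow referenced in the bullets above (a cross-validated tree-based classifier inside a preprocessing pipeline). The input file, column names and parameters are placeholders, not any specific project's code.

# Minimal, illustrative scikit-learn workflow; file name, columns and
# hyperparameters are placeholders, and all features are assumed numeric.
import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("training_data.csv")            # hypothetical dataset
X = df.drop(columns=["target"])                  # hypothetical feature columns
y = df["target"]                                 # hypothetical binary label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", RandomForestClassifier(n_estimators=300, random_state=42)),
])

# 5-fold cross-validation on the training split, then a held-out check.
print(cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc").mean())
model.fit(X_train, y_train)
print(model.score(X_test, y_test))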

TECHNICAL SKILLS

Statistics/ML: Exploratory Data Analysis (univariate/multivariate outlier detection, missing value imputation, histograms/density estimation); Supervised Learning (Linear/Logistic Regression, Lasso, Ridge, Elastic Nets, Decision Trees, Ensemble Methods, Random Forests, Support Vector Machines, Gradient Boosting, LDA, XGBoost, Deep Neural Networks, Bayesian Learning); Unsupervised Learning (Principal Component Analysis, K-means); Time Series (ARIMA, Exponential Smoothing, ARIMAX)

Machine Learning lib/Python: Pandas, NumPy, scikit-learn, SciPy, statistical modeling, MLlib

Deep Learning: TensorFlow, Keras, familiarity with PyTorch

Databases/ETL/Query: MySQL, MS SQL Server, PostgreSQL, SQLAlchemy, BigQuery

Visualization: Power BI, Seaborn, Matplotlib

Big Data Tools: Apache Spark, PySpark, Spark Streaming, Sqoop, Hadoop, YARN, MapReduce, UNIX/Linux

NLP: NLTK, spaCy, Regex, seq2seq DeepNLP, bag of words, Google Dialogflow

Cloud Platform: AWS, Azure

PROFESSIONAL EXPERIENCE

People’s United Financial, Bridgeport, CT Feb 2018 - March 2020


Role: Data Scientist/Machine Learning

Description: Worked on risk modelling and credit card fraud detection, building detection algorithms with Python and Apache Spark and using Apache Kafka for real-time processing. Built deep learning neural networks for the modelling and worked on a text analytics NLP project covering email parsing and sentiment analysis.
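Purely as an illustration, a credit card fraud classifier of the kind described above could be sketched with scikit-learn as follows; the file name, label column and parameters are assumptions, not the production pipeline.

# Illustrative imbalanced-class fraud classifier; input file, columns and
# parameters are hypothetical, and features are assumed to be numeric.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, roc_auc_score

df = pd.read_csv("card_transactions.csv")        # hypothetical transaction extract
X = df.drop(columns=["is_fraud"])                # engineered transaction features
y = df["is_fraud"]                               # 1 = fraud, 0 = legitimate

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# class_weight="balanced" compensates for the rarity of fraud cases.
clf = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                             n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)

proba = clf.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, proba))
print(classification_report(y_test, clf.predict(X_test)))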

Responsibilities:
• Implemented machine learning and deep learning neural network algorithms using the TensorFlow framework, and designed prediction models using data mining techniques with Python and PySpark libraries such as NumPy, SciPy, MLlib, Matplotlib, Pandas and scikit-learn.
• Used scikit-learn to build various classification, regression, clustering, ensemble and deep learning models such as support vector machines, random forests, gradient boosting, k-means, XGBoost, AdaBoost, ANNs, autoencoders and SOMs.
• Worked with the Big Data platform using PySpark, Spark SQL and MLlib; extracted data from the data lake.
• Conducted exploratory data analysis and feature engineering, including data cleaning, data preparation, feature extraction, visualization and outlier detection.
• Found insights from millions of customer chat and call records for the NLP project.
• Implemented NLP methods for sentiment analysis, dependency parsing, entity recognition, lemmatization, stemming, tokenization and POS tagging using spaCy, Gensim and NLTK to understand customer queries/interactions with the products (see the sketch after this list).
• Made rigorous use of regular expressions, urllib2, BeautifulSoup, NLTK and spaCy for text parsing and sentiment analysis.
• Reviewed business requirements and analyzed data sources.
• Developed predictive models for Sales and Finance teams using various ML and DL
algorithms.
• Created and presented executive dashboards showing patterns and trends in the data using Power BI Desktop and query mining.
• Involved in developing and testing SQL scripts for report development, Power BI reports, the query editor and dashboards, and handled performance issues effectively.
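The sketch below shows the spaCy preprocessing steps named in the NLP bullet above (lemmatization, POS tagging, entity recognition) on a made-up sentence; it is a minimal illustration, not the project code.

# Minimal spaCy preprocessing sketch; the sample sentence is invented.
# Requires the small English model: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The customer called twice about a declined card payment in Boston.")

tokens = [(t.text, t.lemma_, t.pos_) for t in doc if not t.is_stop]
entities = [(ent.text, ent.label_) for ent in doc.ents]

print(tokens)    # token, lemma and part-of-speech for non-stopword tokens
print(entities)  # named entities, e.g. ("Boston", "GPE")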

Client: Walmart, Columbus, OH. July 2016 - August 2017


Role: Data Scientist/Machine Learning

Developed an algorithm that accurately predicts product demand and overall sales across multiple classes based on historical sales data for multiple products, helping maintain the right stock of high-demand products while avoiding the cost of carrying unnecessary inventory. Worked on the AWS cloud platform, using SageMaker for machine learning jobs. Involved in a deep NLP chatbot project using Python.
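A minimal sketch of an ARIMA-style demand forecast along the lines described above, assuming a monthly sales series loaded from a CSV; the file, column names and model order are placeholders.

# Illustrative ARIMA forecast with statsmodels; file, columns and the
# (p, d, q) order are placeholders chosen for the sketch, not tuned values.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

sales = pd.read_csv("monthly_sales.csv", parse_dates=["month"],
                    index_col="month")["units_sold"]   # hypothetical series

# In practice the (p, d, q) order would come from ACF/PACF plots or a search.
result = ARIMA(sales, order=(1, 1, 1)).fit()
print(result.forecast(steps=6))   # next six periods of predicted demand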
Responsibilities:
• Gathered business requirements from the client and formulated the approach and design methodology.
• Performed data cleaning, feature scaling and feature engineering using Python.
• Replaced missing data and performed thorough EDA to understand the time series data.
• Captured trends and seasonality patterns through time series models such as ARIMA; used lag variables and sliding-window techniques and iteratively engineered features.
• Worked on databases such as Oracle.
• Worked in data science using Python 3.x on data transformation and validation techniques such as dimensionality reduction with Principal Component Analysis (PCA), A/B testing and factor analysis, with testing and validation using ROC plots, k-fold cross-validation and statistical significance testing.
• Used Python 2.x/3.x to develop other machine learning models such as ARIMA and a hybrid ARIMAX, and used Keras and TensorFlow for deep learning, implementing LSTM modelling (a toy sketch follows this list).
• Generated visualizations using Power BI dashboards and Seaborn.
• Helped with web app development.
• Involved in an NLP project using NLTK, a seq2seq model (a specialized RNN architecture), TensorFlow, LSTM and Google Dialogflow to build a robust chatbot system.
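The toy sketch below shows a sliding-window LSTM forecaster in Keras of the kind mentioned in the bullets above; the synthetic series and tiny architecture are for illustration only.

# Toy Keras LSTM forecaster on sliding windows; the series is synthetic and
# the architecture/epochs are illustrative, not tuned.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

series = np.sin(np.linspace(0, 40, 400))         # stand-in for a real sales series
window = 12

# Build (samples, window, 1) inputs and next-step targets.
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]

model = keras.Sequential([
    keras.Input(shape=(window, 1)),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, verbose=0)

print(model.predict(X[-1:], verbose=0))          # one-step-ahead prediction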

Vertiv, Columbus, Ohio May 2015 – May 2016


Role: Data Analyst/Data Scientist

Description: Worked on stock price prediction using machine learning, risk analysis and financial sentiment analysis. Worked on big data processing with Hadoop (Sqoop, Flume, Hive, MapReduce, YARN) and created data pipelines. Worked in a Unix/Linux environment.

Responsibilities:
• Assisted the business by delivering machine learning projects from beginning to end: aggregating and exploring data, building and validating predictive models and deploying completed models to deliver business impact to the organization.
• Performed data cleaning, feature scaling and feature engineering using the pandas and NumPy packages in Python, and built models.
• Created impact documents specifying changes introduced as part of the program and led the business process team.
• Worked with different data formats such as CSV, JSON and XML, applied machine learning algorithms (scikit-learn, statsmodels) in Python and tried deep learning techniques such as LSTM.
• Worked with big data consultants to analyze, extract, normalize and label relevant data using statistical modeling techniques such as logistic regression, decision trees, support vector machines, random forests, Naive Bayes and neural networks.
• Reviewed business data for trends, patterns and causal analysis to help identify model drift and retrain models.
• Derived data from different financial APIs such as Yahoo Finance and used text analytics to retrieve data such as financial statements (a simplified sketch follows this list).
• Drafted financial modelling strategies and backtested models.
• Developed MapReduce/Driver Java modules for machine learning and predictive analytics.
• Rapid model creation in Python using pandas, NumPy, scikit-learn and Keras.
• Extracted source data from Oracle tables, MS SQL Server, sequential files and Excel sheets.
• Also performed SQL queries for data analysis and integration.
Environment: Python, HTML5, OLTP, Random Forests, OLAP, HDFS, JSON, MySQL, NumPy, Matplotlib, Spark
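As a simplified sketch of the lag-feature stock modelling described in this section, the snippet below builds lagged-return features in pandas and scores a tree-based regressor with time-ordered cross-validation; the CSV source and feature choices are assumptions, not the original pipeline.

# Sketch of lag-feature return modelling; the price file and features are
# placeholders, and the model is illustrative rather than a trading strategy.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

prices = pd.read_csv("daily_prices.csv", parse_dates=["date"],
                     index_col="date")           # hypothetical close prices

df = pd.DataFrame({"close": prices["close"]})
for lag in (1, 2, 3, 5, 10):                     # simple lagged-return features
    df[f"ret_lag_{lag}"] = df["close"].pct_change(lag)
df["target"] = df["close"].pct_change().shift(-1)    # next-day return
df = df.dropna()

X = df.drop(columns=["close", "target"])
y = df["target"]

# Time-ordered splits avoid look-ahead leakage when validating.
model = RandomForestRegressor(n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=TimeSeriesSplit(n_splits=5)).mean())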

Dator Technologies Inc., Kathmandu, Nepal Jan 2013 - Dec 2013


Role: Data Analyst
Description: Performed data analysis and data profiling using complex SQL on various source systems including Oracle and MS SQL.
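The original profiling was done in SQL against Oracle and MS SQL; purely as an illustration, an equivalent profiling pass over an extracted table can be sketched in pandas as below, with the file and columns as placeholders.

# Generic data-profiling pass in pandas over a hypothetical extract:
# per-column types, null rates, distinct counts and summary statistics.
import pandas as pd

table = pd.read_csv("source_extract.csv")        # hypothetical source-system extract

profile = pd.DataFrame({
    "dtype": table.dtypes.astype(str),
    "non_null": table.notna().sum(),
    "null_pct": table.isna().mean().round(3),
    "distinct": table.nunique(),
})
print(profile)
print(table.describe(include="all").T)           # basic per-column statistics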
Responsibilities:
• Involved in defining the source to target data mappings, business rules, data definitions
• Involved in defining the business/transformation rules applied for sales and service data.
• Worked with project team representatives to ensure that logical and physical ER/Studio
data models were developed in line with corporate standards and guidelines.
• Defined the list codes and code conversions between the source systems and the data mart.
• Responsible for defining the key identifiers for each mapping/interface.
• Performed coding migration, database change management and data management through the various stages of the development life cycle.
• Performed and managed daily database maintenance, monitoring and performance tuning tasks/jobs.
• Implemented the metadata repository; maintained data quality, data clean-up procedures, transformations, data standards and the data governance program; wrote scripts, stored procedures and triggers; and executed test plans.
• Remained knowledgeable in all areas of business operations to identify system needs and requirements.

EDUCATION
MS in Finance, University of Bridgeport (May 2019)
BS-BBA, Franklin University, Ohio (2015)
