/**
 * @author Shubham Kadam
 * @role Data Analyst & Developer
 */
public class DataAnalyst extends Developer {
    // Building scalable systems & turning data into insights
    private String name = "Shubham Kadam";
    private String[] skills = {"Python", "SQL", "R", "Power BI", "Tableau", "Java", "Web Technologies"};
    private boolean isAvailable = true;

    public void buildDashboard() {
        System.out.println("Crafting interactive dashboards...");
    }

    public void tellStory() {
        System.out.println("Transforming data into decisive stories!");
    }
}
ABOUT ME
Data Analyst & Developer
I’m a Data Analyst & Developer passionate about transforming raw data into actionable insights that guide smarter decisions.
My expertise spans the entire analytics lifecycle—collecting and cleaning data, building statistical and predictive models, designing impactful visualizations, and delivering insights that drive results.
Beyond analytics, I code. I create interactive dashboards, engineer scalable data pipelines, and build reliable Java applications that integrate seamlessly with databases and web platforms.
What sets me apart is the combination: analytical precision fused with engineering discipline, resulting in solutions that are intelligent, scalable, and built to last.
Dashboards
Built KPI dashboards where clarity meets speed, shortening reporting cycles by 40%.
Pipelines
Streamlined ETL workflows with PySpark to handle 10M+ rows efficiently.
Visualization
Created intuitive visualizations that made complex datasets easy to understand.
Java Apps
Built secure, user-friendly desktop tools with Swing/JDBC, integrated with PostgreSQL.
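The extract–clean–aggregate pattern behind the pipelines above can be sketched in a few lines. This is an illustrative sketch in plain Python (standing in for PySpark so it runs anywhere); the records and field names are made up for the example:

```python
# Minimal ETL sketch: extract raw rows, transform (drop dirty records,
# cast types), then load an aggregate. Plain Python stands in for PySpark;
# the data below is illustrative only.

raw_rows = [
    {"region": "West", "sales": "1200"},
    {"region": "East", "sales": ""},        # dirty record: missing value
    {"region": "West", "sales": "800"},
    {"region": "East", "sales": "950"},
]

# Transform: filter out rows with missing sales and cast strings to ints.
clean = [
    {"region": r["region"], "sales": int(r["sales"])}
    for r in raw_rows
    if r["sales"]
]

# Load: aggregate total sales per region.
totals = {}
for row in clean:
    totals[row["region"]] = totals.get(row["region"], 0) + row["sales"]

print(totals)  # {'West': 2000, 'East': 950}
```

In PySpark the same shape becomes a `filter` plus a `groupBy().sum()` over a DataFrame, which is what lets the pattern scale past millions of rows.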
TECHNICAL PROFICIENCY
Skills & Technologies
A comprehensive toolkit built through hands-on experience with modern data technologies and development best practices.
PORTFOLIO
Featured Projects
A selection of projects showcasing my skills in data analysis, visualization, and software development.
Processed 100k+ EV records, improved reliability by 30%, and forecasted Tesla registrations with 85% accuracy. Built dashboards to highlight adoption trends.
Analyzed Census 2011 data to identify regional density patterns. Delivered clear visual reports for non‑technical stakeholders.
Created NeuraSG, an offline chatbot built with HTML, CSS, and JavaScript — delivering fast, accessible, and engaging user experiences without external APIs.
Developed secure role‑based portal with real‑time tracking, reducing manual entry time by 40% and improving faculty adoption.
Built and deployed portfolio platform with integrated compilers, attracting 200+ monthly visitors and supporting community learning.
Enhanced the SWIS Foundation website using React, JavaScript, and TypeScript, improving user engagement and streamlining information accessibility for visitors.
CERTIFICATIONS & ACHIEVEMENTS
Milestones & Recognitions
Highlights of certifications and achievements, showcasing growth, learning, and impactful participation.
CREATE DATABASE PortfolioDB;
USE PortfolioDB;

CREATE TABLE Certifications (
    id INT PRIMARY KEY,
    title VARCHAR(100),
    provider VARCHAR(100)
);

INSERT INTO Certifications (id, title, provider) VALUES
    (1, 'Data Science with Python', 'Internshala'),
    (2, 'Software Development BootCamp', 'Floydee Foundation'),
    (3, 'Introduction to Generative AI Studio', 'SimpliLearn');

LOCK TABLES Certifications WRITE;
UPDATE Certifications
SET provider = 'SimpliLearn / SkillUp',
    title = 'Introduction to Generative AI Studio (Updated)'
WHERE id = 3;
UNLOCK TABLES;

SELECT * FROM Certifications;
| ID | Title | Provider |
|---|---|---|
| 1 | Data Science with Python | Internshala |
| 2 | Software Development BootCamp | Floydee Foundation |
| 3 | Introduction to Generative AI Studio (Updated) | SimpliLearn / SkillUp |
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("PortfolioAchievements").getOrCreate()

data = [
    (1, "Cleared the Tata Crucible Quiz Prelim Round-1 (Online Mode)"),
    (2, "Worked with Web development NGO for free as a volunteer"),
    (3, "Participated in the SQL query firing workshop"),
]
columns = ["id", "achievement"]
achievements_df = spark.createDataFrame(data, columns)

# Render the DataFrame rows as a simple HTML table.
rows = achievements_df.collect()
html_output = "<table border='1' cellspacing='0' cellpadding='8'>"
html_output += "<thead><tr><th>id</th><th>achievement</th></tr></thead><tbody>"
for row in rows:
    html_output += f"<tr><td>{row['id']}</td><td>{row['achievement']}</td></tr>"
html_output += "</tbody></table>"

with open("achievements.html", "w", encoding="utf-8") as f:
    f.write(html_output)

print(html_output)
| id | achievement |
|---|---|
| 1 | Cleared the Tata Crucible Quiz Prelim Round-1 (Online Mode) |
| 2 | Worked with Web development NGO for free as a volunteer |
| 3 | Participated in the SQL query firing workshop |
GET IN TOUCH
Let's work together
Have a project in mind or want to collaborate? I'd love to hear from you.
// Current status
private boolean isAvailable = true;
private String preferredWork = "Remote / Hybrid";
Smart insights, seamless conversations.