Key Responsibilities:
Design and build batch and real-time data warehouses to support overseas e-commerce growth
Develop efficient ETL pipelines to optimize data processing performance and ensure data quality/stability
Build a unified data middleware layer to reduce business data development costs and improve service reusability
Collaborate with business teams to identify core metrics and data requirements, delivering actionable data solutions
Discover data insights in collaboration with business owners
Participate in AI-driven efficiency-enhancement initiatives, collaborating on machine learning algorithm development, feature engineering, and data processing workflows
Essential Qualifications:
Proficiency in Korean, plus Chinese or English at a business communication level for international team collaboration
3+ years of data engineering experience with proven track record in data platform/data warehouse projects
Proficient in the Hadoop ecosystem (Hive, Kafka, Spark, Flink), SQL, and at least one programming language (Python/Java/Scala)
Solid understanding of data warehouse modeling (dimensional modeling, star/snowflake schemas) and ETL performance optimization
Strong cross-department collaboration skills to translate business requirements into technical solutions
Bachelor's degree or higher in Computer Science, Data Science, Statistics, or related fields
Preferred Qualifications:
Experience with AI model development workflows and hands-on machine learning/deep learning projects