Cloud-AI: Serverless ML Pipeline

Overview

A serverless machine learning inference pipeline built on AWS infrastructure, with a Telegram Bot interface for on-demand model predictions.

Technical Stack

- Cloud: AWS Lambda, API Gateway, S3, DynamoDB
- Infrastructure: Terraform IaC for reproducible deployments
- Development: LocalStack for local testing
- Runtime: Dockerized Python for portable ML deployments
- Interface: Telegram Bot API for user interaction

Key Features

- Serverless Architecture: pay-per-use model with AWS Lambda for cost-efficient inference
- Infrastructure as Code: version-controlled, reproducible cloud infrastructure with Terraform
- State Management: DynamoDB for lightweight request logging and state management
- Portable Deployments: Docker containers for consistent ML model execution
- User-Friendly Interface: Telegram bot for easy, on-demand inference requests

Architecture

1. User sends a request via the Telegram Bot
2. API Gateway receives and routes the request
3. Lambda function processes the input and runs inference
4. Results are stored in DynamoDB and returned to the user
5. S3 holds model artifacts and data

Links

- GitHub Repository
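The request flow above can be sketched as a minimal Lambda handler. This is an illustrative sketch, not the project's actual code: the payload shape follows the API Gateway proxy convention for a Telegram webhook, and both the model inference and the DynamoDB write are stubbed with placeholders.

```python
import json

def lambda_handler(event, context):
    # Hypothetical sketch: API Gateway proxies the Telegram webhook update
    # to Lambda as a JSON string in event["body"].
    update = json.loads(event.get("body") or "{}")
    text = update.get("message", {}).get("text", "")

    # Placeholder for the real model inference step
    # (in the project, a Dockerized Python model).
    prediction = f"prediction for: {text}"

    # In the real pipeline the result would also be logged to DynamoDB
    # via boto3 before replying to the user (omitted here).
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": prediction}),
    }
```

Keeping the handler a thin adapter like this makes it easy to exercise locally (e.g. against LocalStack) before deploying with Terraform.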

November 1, 2024 · 1 min · Mohammadreza Hendiani

DS-Toolbox: Statistical & Modeling Toolkit

Overview

A modular toolkit for data science workflows, providing reusable functions for statistical analysis, simulation, visualization, and machine learning.

Technical Stack

- Languages: Python, R
- Libraries: NumPy, Pandas, scikit-learn, Matplotlib
- Documentation: Comprehensive examples on GitHub

Features

- Statistical Analysis: modular functions for common statistical operations
- Simulation Tools: functions for data simulation and Monte Carlo methods
- Visualization: easy-to-use plotting utilities
- ML Tools: regression, classification, and clustering implementations for rapid prototyping
- Documentation: comprehensive examples and use cases

Use Cases

- Rapid prototyping of ML models
- Statistical analysis workflows
- Educational demonstrations
- Data exploration and visualization

Links

- GitHub Repository

May 1, 2024 · 1 min · Mohammadreza Hendiani

Fedora Package Maintenance & AI-ML SIG

Overview

Active contributions to the Fedora Linux ecosystem as a package maintainer and AI-ML Special Interest Group member since May 2024.

Role & Responsibilities

- Package Maintainer: maintaining and updating packages in Fedora repositories
- AI-ML SIG Member: collaborating on AI/ML tooling for the Fedora ecosystem
- Collaboration: working with 15+ global developers to accelerate AI/ML tool availability

Key Packages

- llama-cpp: efficient LLM inference implementation in C/C++. Collaborating with Tom Rix (AMD) on packaging and maintenance. ...

May 1, 2024 · 1 min · Mohammadreza Hendiani

Llama.cpp & Whisper.cpp Contributions

Overview

Contributions to two popular C/C++ AI model implementations: Llama.cpp for large language models and Whisper.cpp for speech recognition.

Llama.cpp

Llama.cpp is a high-performance C/C++ implementation for running LLaMA models efficiently on consumer hardware.

Contributions:

- Improved build system compatibility
- Documentation improvements
- Bug fixes and testing

Whisper.cpp

Whisper.cpp is a C/C++ port of OpenAI's Whisper automatic speech recognition model.

Contributions:

- Build system improvements
- Documentation enhancements
- Cross-platform compatibility fixes

Impact

These projects enable running state-of-the-art AI models locally, without expensive GPU hardware or cloud services, democratizing access to AI technology. ...

March 1, 2024 · 1 min · Mohammadreza Hendiani