Anyone who stops learning is old, whether at twenty or eighty. Anyone who keeps learning stays young. - Henry Ford
I am learning in the fields of cloud computing, AI infrastructure, and generative AI. In cloud computing, I delve into the distributed systems, data storage, and computing resources that are essential for scalable and efficient AI applications. The study of AI infrastructure focuses on the underlying hardware and software architectures, including GPUs and neural network frameworks, that enable efficient training and deployment of AI models. The generative AI study explores models capable of creating new content, along with their potential applications and ethical considerations. This learning journey encompasses both theoretical knowledge and practical skills, emphasizing hands-on experience with tools and platforms; the goal is to keep pace with technological advancements.
Foundational and core skills
Learning foundational skills is the first step in building a career in AI. The AI domain is vast, and no one can master it in a short period of time. By cultivating the habit of learning a little every week, we can make significant progress.
Understand models such as linear regression, logistic regression, neural networks, decision trees, clustering, and anomaly detection.
Understand the concepts on how and why ML works, such as bias/variance, cost functions, regularization, optimization algorithms, and error analysis.
Know the basics of neural networks, practical skills for making them work, convolutional networks, sequence models, and transformers.
Use visualizations and other methods to systematically explore a dataset; this is particularly useful in data-centric AI development.
Write good software to implement complex AI systems: programming fundamentals, data structures, algorithms, software design, and familiarity with Python and key libraries such as TensorFlow or PyTorch and scikit-learn. A minimal sketch follows this list.
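As a concrete illustration of this kind of hands-on practice, the hedged sketch below trains a logistic regression model with scikit-learn and compares training and validation accuracy, a quick way to reason about bias and variance. The dataset and split sizes are illustrative choices, not part of any particular course.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Load a small, built-in classification dataset (illustrative choice).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

# Scale features, then fit logistic regression; the regularization strength C
# is one of the knobs discussed under bias/variance and regularization.
model = make_pipeline(StandardScaler(), LogisticRegression(C=1.0, max_iter=1000))
model.fit(X_train, y_train)

# A large gap between these two numbers suggests high variance (overfitting);
# two similarly low numbers suggest high bias (underfitting).
print("train accuracy:", model.score(X_train, y_train))
print("validation accuracy:", model.score(X_val, y_val))
```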
AI Agentic Design Patterns with AutoGen
Learn how to build and customize multi-agent systems, enabling agents to take on different roles and collaborate to accomplish complex tasks using AutoGen, a framework for developing LLM applications with multiple agents; a minimal two-agent sketch appears at the end of this section.
Gain hands-on experience with AutoGen’s core components and a solid understanding of agentic design patterns. You’ll be ready to effectively implement multi-agent systems in your workflows.
Use the AutoGen framework with any model via API call or locally within your own environment.
Tech stack: Basic Python, Agent, API, AutoGen.
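A hedged sketch of what a minimal two-agent setup can look like with the pyautogen package; the model name and API-key placeholder are assumptions, not course-specific values.

```python
# Minimal two-agent conversation with AutoGen (pyautogen).
# The model name and API key below are placeholders.
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4o-mini", "api_key": "YOUR_API_KEY"}]}

# The assistant agent plays the LLM "worker" role.
assistant = AssistantAgent(name="assistant", llm_config=llm_config)

# The user proxy drives the conversation on behalf of the human;
# human_input_mode="NEVER" lets it run unattended in this sketch.
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)

# Kick off a short task; AutoGen manages the message exchange between agents.
user_proxy.initiate_chat(
    assistant,
    message="Suggest three blog post titles about multi-agent systems.",
)
```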
Fundamentals of Deep Learning
This course is an instructor-led workshop from NVIDIA with hands-on labs focusing on the key points below; a small Keras sketch follows the list.
An introduction to deep learning and neural networks.
Training neural networks, including aspects like learning rate, activation functions, and overcoming overfitting.
Exploring convolutional neural networks and their applications in computer vision.
The significance of data augmentation and deployment strategies for deep learning models.
Leveraging pre-trained models to accelerate development and enhance performance.
Advanced architectures, including recurrent neural networks, autoencoders, and generative adversarial networks.
Tech stack: GPU-powered cloud server, JupyterLab, TensorFlow, Keras.
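A hedged sketch of the kind of model built in such labs: a small Keras convolutional network with on-the-fly data augmentation. The dataset (MNIST) and layer sizes are illustrative choices, not the workshop's exact materials.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative dataset choice; the workshop uses its own lab datasets.
(x_train, y_train), (x_val, y_val) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_val = x_val[..., None] / 255.0

model = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    # Simple data augmentation applied during training only.
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
    # A small convolutional stack for image classification.
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),  # one common way to fight overfitting
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=3, batch_size=128)
```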
AI Python for Beginners
This course is for anyone curious about AI and programming with Python, from complete beginners learning to code for the first time to professionals seeking to boost productivity and learn how to properly integrate AI into their coding process.
Learn Python programming fundamentals and how to integrate AI tools for data manipulation, analysis, and visualization.
Discover how Python can be applied in various domains such as business, marketing, and journalism to solve real-world problems and enhance efficiency through practical applications.
Leverage AI assistants to debug code, explain concepts, and enhance your learning, mirroring real-world software development practices.
Tech stack: Basic Python, AI-Assisted Coding, API Interaction.
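A hedged sketch of what "AI-assisted coding" through an API call can look like. It assumes the openai package and an OPENAI_API_KEY environment variable; the model name and snippet are placeholders, not something the course prescribes.

```python
# Ask an AI assistant to explain and improve a small piece of Python code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

snippet = "def average(nums): return sum(nums) / len(nums) if nums else 0"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful Python tutor."},
        {"role": "user", "content": f"Explain what this function does and suggest improvements:\n{snippet}"},
    ],
)
print(response.choices[0].message.content)
```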
Designing and implementing a Microsoft Azure AI solution
This course focuses on leveraging Microsoft Azure's artificial intelligence capabilities: understanding Azure AI services and how to implement them effectively to solve complex business problems. It covers topics such as creating AI solutions with Azure Machine Learning, Azure Cognitive Services (including computer vision and natural language processing), and Azure Bot Service. It aims to provide practical skills for building, training, and deploying AI models, as well as integrating AI features into applications and services on Microsoft's Azure cloud platform.
Tech stack: Azure, Python, AI toolbox.
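A hedged sketch of calling one Azure AI service from Python: sentiment analysis with the Text Analytics (Azure AI Language) client. The endpoint and key are placeholders, and the exact mix of services in the course may differ.

```python
# Sentiment analysis with Azure AI Language (Text Analytics) from Python.
# Requires the azure-ai-textanalytics package; endpoint and key are placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

documents = ["The new release is fast and the documentation is excellent."]
result = client.analyze_sentiment(documents)

for doc in result:
    if not doc.is_error:
        # Overall sentiment plus per-class confidence scores.
        print(doc.sentiment, doc.confidence_scores)
```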
Generative AI with LLM
Gain foundational knowledge, practical skills, and a functional understanding of how generative AI works. Dive into the latest research on Gen AI to understand how companies are creating value with cutting-edge technology.
Tech stack: AWS, Python, Model.
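A hedged sketch of running a small open LLM locally with the Hugging Face transformers library; the specific checkpoint is an illustrative choice, not necessarily what the course uses.

```python
# Text generation with a small open model via Hugging Face transformers.
# The checkpoint below is an illustrative choice, not a course requirement.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

prompt = ("Summarize: Generative AI models learn patterns from data and can "
          "produce new text, images, and code.")
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```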
Introduction to AI in the Data Center
Learn AI use cases across industries, the concepts of AI, machine learning (ML), and deep learning (DL), what a GPU is, and the differences between a GPU and a CPU.
Learn about the software ecosystem that allows developers to use GPU computing for data science, and the considerations for deploying AI workloads in a data center on premises, in the cloud, in a hybrid model, or in a multi-cloud environment.
Explore the requirements for multi-system AI clusters, storage and networking considerations for such deployments, and an overview of NVIDIA reference architectures, which provide best practices for designing systems for AI workloads.
Cover data-center-level considerations when deploying AI clusters, such as infrastructure provisioning, workload management, orchestration and job scheduling, tools for cluster management and monitoring, and power and cooling.
Lastly, learn about AI infrastructure offered by NVIDIA partners through the DGX-Ready Data Center colocation program.
Tech stack: GPUs, NVIDIA, CUDA.
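To make the GPU-versus-CPU point concrete, here is a hedged sketch that checks for a CUDA device and times a matrix multiplication on CPU versus GPU. PyTorch is used as an illustrative tool here, not as part of the NVIDIA course materials.

```python
# Compare a matrix multiplication on CPU vs. GPU (if CUDA is available).
import time
import torch

def timed_matmul(device: str, size: int = 4096) -> float:
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # make sure setup is finished before timing
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the GPU kernel to complete
    return time.perf_counter() - start

print(f"CPU: {timed_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"GPU: {timed_matmul('cuda'):.3f}s on {torch.cuda.get_device_name(0)}")
else:
    print("No CUDA-capable GPU detected; running on CPU only.")
```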
Generative AI courses from DeepLearning.AI
Take your generative AI skills to the next level with short courses from DeepLearning.AI. These short courses help me learn new skills, tools, and concepts efficiently. Check them out, as they are available for free for a limited time.
ChatGPT Prompt engineering for developers, building systems with the ChatGPT API.
LangChain for LLM application development.
Finetuning LLMs, how diffusion models work.
How business thinkers can build AI plugins with Semantic Kernel.
Pair programming with an LLM, understanding and applying text embeddings with Vertex AI (a LangChain-style sketch follows this list).
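A hedged sketch of the kind of pattern the LangChain course covers: a prompt template piped into a chat model and an output parser. The package layout and model name assume a recent LangChain version and an OpenAI key; this is not code taken from the course.

```python
# A minimal LangChain "prompt | model | parser" chain.
# Assumes langchain-core and langchain-openai are installed and OPENAI_API_KEY is set.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise technical writer."),
    ("user", "Explain {topic} in two sentences for a beginner."),
])

# Compose the chain: fill the template, call the model, parse the text output.
chain = prompt | ChatOpenAI(model="gpt-4o-mini", temperature=0) | StrOutputParser()

print(chain.invoke({"topic": "text embeddings"}))
```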
DevOps: Kubernetes course
Kubernetes, also known as K8s, is the most popular container orchestration platform for automating deployment, scaling, and management of containerized applications.
Basics of Kubernetes: its architecture with master nodes, worker nodes, and pods, and main components such as the API server, controller manager, scheduler, and etcd.
The syntax and contents of a K8s configuration file, which is used to create and configure components in a Kubernetes cluster (see the example manifest at the end of this section).
Set up a K8s cluster locally with Docker Desktop, and learn to use Minikube and kubectl commands.
Perform a hands-on project to deploy a web application with MongoDB on a local Kubernetes cluster.
Tech stack: Docker, Kubernetes, MongoDB, YAML.
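As referenced above, a hedged sketch of what such a configuration file looks like: a minimal Deployment manifest for a hypothetical web app. The names, image, and replica count are placeholders, not the course project's actual manifest; it would be applied with `kubectl apply -f webapp-deployment.yaml`.

```yaml
# Minimal Kubernetes Deployment manifest (names and image are placeholders).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 2                  # run two identical pods for basic availability
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: nginx:1.25    # placeholder image; the course uses its own web app
          ports:
            - containerPort: 80
```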
Building Agentic RAG with LlamaIndex
Explore one of the most rapidly advancing applications of agentic AI: use LlamaIndex to get started with agentic RAG, a framework designed to build research agents skilled in tool use, reasoning, and decision-making over your data. A minimal router sketch follows this section.
Learn how to build an agent that can reason over your documents and answer complex questions.
Build a router agent that can help you with Q&A and summarization tasks, and extend it to handle passing arguments to this agent.
Design a research agent that handles multi-documents and learn about different ways to debug and control this agent.
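A hedged sketch of the router pattern described above, built with LlamaIndex: a vector index for Q&A and a summary index for summarization, wrapped as tools behind a router query engine. The file path and question are placeholders, and a configured OpenAI API key (LlamaIndex's default LLM backend) is assumed.

```python
# Router-style agentic RAG over one document with LlamaIndex.
# Assumes recent llama-index packages and a configured OpenAI API key.
from llama_index.core import SimpleDirectoryReader, SummaryIndex, VectorStoreIndex
from llama_index.core.query_engine import RouterQueryEngine
from llama_index.core.selectors import LLMSingleSelector
from llama_index.core.tools import QueryEngineTool

# Load a local document (placeholder path).
documents = SimpleDirectoryReader(input_files=["paper.pdf"]).load_data()

# One index tuned for specific-fact Q&A, one for whole-document summarization.
vector_index = VectorStoreIndex.from_documents(documents)
summary_index = SummaryIndex.from_documents(documents)

qa_tool = QueryEngineTool.from_defaults(
    query_engine=vector_index.as_query_engine(),
    description="Useful for answering specific questions about the document.",
)
summary_tool = QueryEngineTool.from_defaults(
    query_engine=summary_index.as_query_engine(response_mode="tree_summarize"),
    description="Useful for summarizing the entire document.",
)

# The router inspects each query and picks the right tool for it.
router = RouterQueryEngine(
    selector=LLMSingleSelector.from_defaults(),
    query_engine_tools=[qa_tool, summary_tool],
    verbose=True,
)
print(router.query("What problem does this document try to solve?"))
```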