24th June 2021, 10:00am – 11:30am
Topic: Support Vector Machine Fundamentals
Speaker: Dr. Manisha Thakkar (Dept CSE, MIT-WPU Pune)


Dr. Manisha Thakkar explained the following:
- SVM is used for both regression and classification. The key ideas behind SVM are the max-margin classifier, Lagrangian multipliers, kernels, and complexity.
- A hyperplane is a linear decision surface that splits the space into two parts; a hyperplane therefore acts as a binary classifier.
- Quadratic programming is an optimization problem in which the objective function is quadratic, subject to linear constraints.
- If the data is not linearly separable in the input space, a kernel is applied to map the data into a higher-dimensional space where it is linearly separable. Popular kernels: linear, Gaussian, exponential, polynomial, hybrid, and sigmoidal.
- The function to minimize is the sum of a loss term and a penalty term. The loss measures the error of fitting the data; the penalty penalizes the complexity of the learned function.
- Tuning parameters: kernel, regularization (how much misclassification is tolerated), gamma (how far the influence of a single training example reaches), and margin.
- Example from computational biology: proteins are synthesized in the cytosol and transported to different subcellular locations where they carry out their functions; SVMs can be used to predict these locations.
- Pros: SVMs empirically achieve excellent results on high-dimensional data with a large number of variables and small samples.
- Cons: SVMs do not directly provide probability estimates; these are calculated using cross-validation.
- Applications of SVM: image-based analysis and classification tasks, geo-spatial applications, text-based applications, computational biology, security applications, and chaotic system control.
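The tuning parameters discussed above (kernel, regularization, gamma) can be sketched with scikit-learn, which the practical session also used. A minimal illustration, assuming scikit-learn is installed; the toy blob dataset is a stand-in, not data from the lecture:

```python
# Max-margin classification on a toy 2-D dataset with an RBF kernel.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two roughly linearly separable clusters of points.
X, y = make_blobs(n_samples=100, centers=2, random_state=0)

# C controls how much misclassification is tolerated (regularization);
# gamma controls how far the influence of a single training example reaches.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)

print("training accuracy:", clf.score(X, y))
print("support vectors:", len(clf.support_))
```

Only the training points nearest the decision boundary become support vectors, which is what makes SVM a max-margin classifier.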


24th June 2021, 11:45am – 01:15pm
Topic: Markov Decision Process
Speaker: Dr. Jayshree Aher (Dept CSE, MIT-WPU Pune)


Dr. Jayshree Aher explained the following:
- A Markov Decision Process (MDP) is a discrete-time stochastic control process that focuses on long-term utility.
- MDP components: S: state; A: action; P: transition function; R: reward; γ: discount factor.
- Some AI capabilities are generalized learning, reasoning, and problem solving; an AI agent can adapt, reason, and provide solutions.
- Episodic tasks are tasks that have a terminal (end or final) state. Continuous tasks have no end, i.e., no terminal state.
- Reinforcement learning deals with knowledge of the current state and optimal prediction of the next state, in terms of the agent, the environment, and transitions.
- Applications of MDPs: robot path planning, travel route planning, bank customer retention, manufacturing processes, network switching and routing.
- Discounting: the discount factor (γ) determines how much importance is given to the immediate reward versus future rewards. Discounted rewards prefer solutions that arrive sooner and compensate for uncertainty about the available time.
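The MDP components listed above (S, A, P, R, γ) can be made concrete with a small value-iteration sketch. The 3-state chain below is a hypothetical example, not one from the lecture; it uses only the standard library:

```python
# States 0..2; state 2 is terminal. Actions: 0 = "stay", 1 = "move right".
# P[s][a] = list of (probability, next_state); R[s][a] = immediate reward.
P = {
    0: {0: [(1.0, 0)], 1: [(1.0, 1)]},
    1: {0: [(1.0, 1)], 1: [(1.0, 2)]},
    2: {0: [(1.0, 2)], 1: [(1.0, 2)]},
}
R = {
    0: {0: 0.0, 1: 0.0},
    1: {0: 0.0, 1: 1.0},   # entering the terminal state pays reward 1
    2: {0: 0.0, 1: 0.0},
}
gamma = 0.9  # discount factor: weight of future vs. immediate reward

# Value iteration: repeatedly apply the Bellman optimality update.
V = {s: 0.0 for s in P}
for _ in range(100):
    V = {s: max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                for a in P[s])
         for s in P}

print(V)  # V[1] = 1.0 and V[0] = 0.9: the farther state discounts the reward
```

This shows long-term utility directly: state 0 is worth γ times the value of state 1, because its reward lies one step further in the future.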


24th June 2021, 02:30pm – 04:30pm
Topic: Practical session on SVM
Speaker: Dr. Manisha Thakkar (Dept CSE, MIT-WPU Pune) and Dr. Jayshree Aher (Dept CSE, MIT-WPU Pune)


Dr. Manisha Thakkar and Dr. Jayshree Aher explained the following:
SVM (Support Vector Machine) is a supervised, linear machine-learning algorithm most commonly used for solving classification problems, and in that setting it is also referred to as Support Vector Classification. Scikit-learn is an open-source Python library that implements a range of machine learning, pre-processing, cross-validation, and visualization algorithms using a unified interface. A genetic algorithm (GA) may be a time-consuming process, but it never moves in the wrong direction, as it is robust in nature. Types of kernels covered: radial basis function (RBF), linear splines kernel.
Practical implementation: SVM on the Cancer Dataset, training with different kernel types and finding the accuracy for each kernel.
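A sketch of the practical described above, assuming scikit-learn's built-in breast-cancer dataset stands in for the session's cancer dataset: train an SVC with each kernel and compare held-out accuracy.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Feature scaling matters for the RBF, polynomial, and sigmoid kernels.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Fit one classifier per kernel and record its test accuracy.
accuracies = {}
for kernel in ("linear", "rbf", "poly", "sigmoid"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    accuracies[kernel] = clf.score(X_test, y_test)

for kernel, acc in accuracies.items():
    print(f"{kernel:>8}: {acc:.3f}")
```

Comparing kernels this way mirrors the session's exercise; on this dataset the linear and RBF kernels typically perform strongly once the features are standardized.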