‘Sheld-on’ Massive Open Online Courses

Indoor living space improvement: Smart Habitat for the Elderly

The Sheldon MOOC Platform is now available!

Because the COVID-19 pandemic made it impossible for the Sheldon members to arrange most of the planned networking activities, the Management Committee of the Sheldon COST Action CA16226 agreed to support the launch of a MOOC (Massive Open Online Courses) platform that serves as a repository of educational content related to the Sheldon objectives, whether developed within the frame of the Sheldon Action or by other means.

The Sheldon MOOC structure aggregates knowledge and learning materials for key topics related to Sheldon, primarily Ambient Assisted Living. The materials are approved by the MOOC teams and cover the areas of expertise relevant to the Action:

– Introduction to Ambient Assisted Living (AAL)
– Ethical and usability aspects
– Furniture and habitats from different disciplines (healthcare, psychology, ergonomics, construction, etc.) and from the users' perspective (elderly people, caregivers, etc.)
– Innovative ICT solutions to be integrated into smart support furniture and habitat environments
– Healthcare domain: home care domain systems, smart homes, health monitoring and assistance
– Electrical, electronic and information engineering: sensors and sensor systems
– Mechanical engineering: product design, ergonomics, mechanical engineering aspects of man-machine interfaces
– Assistive technologies: overview of the state of the art, challenges and perspectives
– Selected topics: user interfaces for older people, ADL recognition and smart home technologies, introduction to locomotion analysis
– Aspects of the user-centred design approach

Explainable Artificial Intelligence in Ambient Assisted Living

The main goal of this course is to introduce students to techniques that can be used to explain Artificial Intelligence models, with a specific focus on their application to models that analyze and classify data from AAL systems. It also aims to demonstrate the Python tools and libraries that make this application easier and available for a wide variety of machine learning models.

Students taking this course are expected to be familiar with the basic workings of machine learning models, including decision trees, random forests, logistic regression, linear regression, and neural networks for regression and classification.

After completing this course, students will understand some of the most widely used techniques in explainable AI, such as SHAP and LIME, and will know how to use Python to apply them to various machine learning models with different types of input data. Students will be able to interpret the explanations produced by these tools and identify ways to further improve a model based on them.

Topics covered in this course include the SHAP explainer and how it can be used to explain classification algorithms applied to tabular and image data; LIME and its usage in the explanation of the classification of tabular data; and Partial Dependence Plots for the explanation of classification and regression algorithms.

The tools used in this course include the shap and lime Python libraries, which provide explainers for a wide range of machine learning and artificial intelligence models, together with visualizers that adapt the explanations to the type of input data used. Scikit-learn is used to compute the values for the Partial Dependence Plots, and matplotlib is used to visualize them.

These tools, together with an understanding of how the explanation techniques work and what their output means, should equip students both to improve their machine learning models and to extract new knowledge from those models using explainable AI.

Lecture 1: Explainable AI

This video serves as an introduction to explainable AI. It introduces the concept, the problems that arise when using black box models, the necessity for explanation and the advantages of using it.

Lecture 2: SHAP Values

This video introduces the concept of SHAP values. It explains what they mean and gives a step-by-step example of how they are calculated.

Lecture 3: SHAP Example 1

This video introduces the shap Python library and shows the first example of using SHAP values to explain tabular data.
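
As a taste of what the example covers, here is a minimal sketch of explaining a tree-based model on tabular data with the shap library; the dataset and model below are illustrative placeholders, not the ones used in the video.

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Illustrative placeholder dataset and model.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)    # efficient explainer for tree ensembles
shap_values = explainer.shap_values(X)   # one contribution per feature per sample

shap.summary_plot(shap_values, X)        # global view of feature importance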

Lecture 4: SHAP Example 2

This video shows how to explain tabular data, applied to an AAL dataset, using the shap Python library.

Lecture 5: SHAP Example 3

This is the third example showing the use of the shap Python library. In this video, the explanation of the classification of image data is shown.

Lecture 6: SHAP Example 4

This video shows how to use the shap Python library to explain the classification made by a neural network classifier on tabular AAL data.

Lecture 7: LIME for Explainable AI

This video introduces the LIME explainer. It shows the working of the technique and what its explanations mean.

Lecture 8: LIME example 1

This video introduces the LIME Python library and shows the first example of using LIME to explain tabular data.
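
A minimal sketch of what such a LIME explanation looks like in code, assuming a generic scikit-learn classifier and a standard example dataset rather than the AAL data used in the course:

import lime.lime_tabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Illustrative placeholder dataset and model.
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = lime.lime_tabular.LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain one prediction by fitting a local surrogate model around it.
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(explanation.as_list())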

Lecture 9: LIME Example 2

This video shows how to use the lime Python library to explain the classification made by a random forest classifier on tabular AAL data.

Lecture 10: Partial Dependence Plots and Individual Conditional Expectation

This video introduces the concept of Partial Dependence Plots and Individual Conditional Expectation curves. It explains their meaning, differences, and how they can be used to understand and explain various machine learning models.

Lecture 11: PDP and ICE Example 1

This video shows how scikit-learn can be used to compute the values needed for the PDP and ICE for a neural network regression model and how matplotlib can be used to visualize these plots and curves.
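
A minimal sketch of this workflow, assuming a generic regression dataset in place of the one used in the video:

import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.inspection import PartialDependenceDisplay
from sklearn.neural_network import MLPRegressor

# Illustrative placeholder dataset; the video uses its own data.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000, random_state=0).fit(X, y)

# kind="both" draws the average PDP together with the individual ICE curves.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "bp"], kind="both")
plt.show()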

Lecture 12: PDP and ICE Example 2

This video shows another example of how PDP and ICE curves can be computed and visualized using scikit-learn and matplotlib, this time for a logistic regression model applied to an AAL dataset.

Lecture 13: PDP and ICE Example 3

This video shows how the PDP of two features can be computed and visualized.

Datasets and Pre-Processing in AAL

The main goal of this course is to introduce students to some of the most important and widely used preprocessing techniques, as well as the tools they can use to apply them to AAL datasets.

After completing this course, students will be able to clean up a given dataset and prepare it for further processing and analysis with other machine learning algorithms.

Students will be able to deal with missing, inconsistent and noisy data. They will learn how to encode categorical data and they’ll be introduced to feature selection and dimensionality reduction techniques.

The course also aims to provide an introduction to pandas and scikit-learn as tools that can be used to preprocess data using Python. The completion of this course should enable students to use these tools effectively to solve data quality issues and to make other necessary adjustments to a dataset of tabular data.

Topics covered in this course include opening a dataset in Python using pandas; dealing with missing data by removing the affected rows or filling in the missing values with estimated ones; dealing with outliers; and using scikit-learn to encode categorical data as numbers, to reduce the size of the dataset and its features, and to scale the data to make it more suitable for use in machine learning applications.

Lecture 1: Introduction to Ambient Assisted Living (AAL) – Sheldon

In this video, the basic concepts of AAL are introduced. An overview of the types of data that can be collected by AAL systems, as well as their sources and applications, is also given.

Lecture 2: AAL Datasets

This video introduces the datasets that will be used throughout the course. It shows how to read in a dataset using pandas and introduces the content and features of the used datasets.

Lecture 3: Handling missing data

In this video, the problem of missing data is shown, as well as techniques and tools that can be used to handle it. It shows how to remove samples with missing data or replace the missing values with an estimate using pandas and scikit-learn.
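
A minimal sketch of both options, using a small made-up sensor table (the actual datasets in the course differ):

import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# A small illustrative frame with missing readings.
df = pd.DataFrame({"temperature": [21.0, np.nan, 22.5, np.nan],
                   "humidity": [40, 42, np.nan, 45]})

dropped = df.dropna()                     # option 1: remove incomplete rows

imputer = SimpleImputer(strategy="mean")  # option 2: fill with an estimate
filled = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)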

Lecture 4: Handling Outliers

This video deals with the problem of outliers and how to handle them.

Lecture 5: Encoding categorical data

This video shows two ways to encode categorical (non-numeric) data using scikit-learn and why it is necessary.
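
A minimal sketch of two common encodings, on a made-up categorical column:

import pandas as pd
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

# Illustrative categorical column (e.g. the room in which a sensor is placed).
df = pd.DataFrame({"room": ["kitchen", "bedroom", "kitchen", "bathroom"]})

ordinal = OrdinalEncoder().fit_transform(df)            # one integer per category
one_hot = OneHotEncoder().fit_transform(df).toarray()   # one binary column per category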

Lecture 6: Normalization

In this video, an example is given of the problems that arise with some machine learning algorithms when the data is not standardized. It also provides a solution by scaling the data using scikit-learn.
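
A minimal sketch of this kind of scaling, using one of the scalers scikit-learn provides; the values are made up for illustration:

import numpy as np
from sklearn.preprocessing import StandardScaler

# Features on very different scales (e.g. age in years, steps per day).
X = np.array([[65.0, 3500.0], [80.0, 1200.0], [72.0, 6400.0]])

# StandardScaler rescales each column to zero mean and unit variance.
X_scaled = StandardScaler().fit_transform(X)
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))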

Lecture 7: Feature selection

This video shows how to select only some of the features in a large dataset when not all of them are needed.
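
One way to do this is univariate feature selection with scikit-learn; the video may use a different approach, and the dataset below is a generic placeholder.

from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif

# Keep only the k features most related to the target.
X, y = load_breast_cancer(return_X_y=True)
X_selected = SelectKBest(score_func=f_classif, k=5).fit_transform(X, y)
print(X.shape, "->", X_selected.shape)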

Lecture 8: Dimensionality reduction

This video shows another way of lowering the complexity of a dataset, again using scikit-learn.
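
A common technique for this is Principal Component Analysis; a minimal sketch with scikit-learn, on a placeholder dataset:

from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA

X, y = load_breast_cancer(return_X_y=True)

# Project the 30 original features onto the 2 directions with the most variance.
X_reduced = PCA(n_components=2).fit_transform(X)
print(X.shape, "->", X_reduced.shape)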

Lecture 9: Sampling

This video shows how to randomly select only some of the rows of a dataset.

Introduction to Python

Python is a general-purpose programming language that keeps gaining popularity as a data science language. Businesses all around the world use Python to extract insights from their data and gain a competitive advantage. In this course you will learn the basics of Python, so you can then start applying it to different domain-specific problems. This course is an introduction to Python's fundamental ideas, taught both interactively and through scripts.

The requirements for this course are basic knowledge of programming and basic knowledge of Object-Oriented Programming (in any programming language).

The goals of this course are to introduce students to creating their first variables and to Python's fundamental data types; to store, access, and manipulate data in lists (the first step toward efficiently working with large amounts of data); to use functions, methods, and packages to reuse code efficiently; to use NumPy and pandas as fundamental Python packages for practicing data science; and to design solutions for short real-life tasks and code them in the Python programming language.

The course is organized into two major sections: Introductory exercises and Problem-solving. 

In the first section of this course, the students will first learn how to install Python, Anaconda (a tool for package management in Python), PyCharm (an Integrated Development Environment (IDE) for Python), and Jupyter (for local interactive notebooks), as well as how to create a Google Colab notebook (a notebook that allows you to collaborate with other team members).

After that, the students will learn how to define expressions and use them in Python, with tips and tricks on syntax pitfalls; how to declare and initialize variables and how memory is allocated for them; how to use numeric variables and perform operations with them; and how to use strings and perform operations on them.

Functions are one of the most important things in programming (regardless of the programming language), so in the following videos of this section the students will learn how to write functions (also known as procedures or methods), call previously defined functions with or without arguments/parameters, use Python's built-in functions on any kind of variable (including strings), and use the type function to discover the type of a given variable in their code.


Afterward, the students will get familiar with the different types of sequences in Python: they will learn how to use lists (creating a list with the range function, list comprehensions, the append and extend methods, sorting, indexing and accessing the elements of a list, etc.), tuples (creating tuples, accessing their elements, the immutability of tuples), dictionaries (creating dictionaries, iterating over the keys and values of a dictionary, accessing values by keys), and sets (the property of sets to store unique values; the intersection, union, and difference of two sets). A few of these operations are sketched below.
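
A minimal sketch of the sequence operations mentioned above:

# Lists are mutable and support comprehensions.
numbers = list(range(5))            # [0, 1, 2, 3, 4]
squares = [n * n for n in numbers]  # list comprehension
numbers.append(5)

point = (2, 3)                      # tuples are immutable; point[0] == 2

ages = {"Ana": 70, "Marko": 82}     # dictionary: access values by key
for name, age in ages.items():
    print(name, age)

a, b = {1, 2, 3}, {2, 3, 4}         # sets store unique values
print(a & b, a | b, a - b)          # intersection, union, difference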

Finally, for the first section, the students will learn how to control the flow of their Python code with if and if-elif statements, for loops, while loops, and break and continue statements; how to handle the most common errors and exceptions in Python; how to use the NumPy library to store one- or two-dimensional arrays; and how to use the pandas library to store tabular data in a data frame and perform operations on data frames.


In the second part of this course (Problem-solving), the students will get familiar with the process of first designing algorithms and then coding them in Python.

The first task is a program that, for a given amount of money, prints the minimum number of bills and coins needed to make the payment. The amount is an integer read from standard input, and the result should be printed in 9 lines, one with the number of bills or coins for each denomination. The task should be solved in two ways: first with a verbose solution, which is then refactored (simplified) into more readable code.
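
A minimal sketch of the idea behind this task; the nine denominations below are an assumption, since the statement does not list them.

# Assumed denominations, largest first; nine of them to match the nine output lines.
DENOMINATIONS = [500, 200, 100, 50, 20, 10, 5, 2, 1]

amount = int(input())          # amount of money read from standard input
for value in DENOMINATIONS:
    count = amount // value    # how many bills/coins of this value fit
    amount %= value            # remainder still to be paid
    print(count)               # one line per denomination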

The second task is a program that reads a number from standard input, checks whether it is a palindrome, and prints an appropriate message. The task should be solved both by using loops and the digits of the number itself, and by using strings and some of Python's built-in string functions.
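
A minimal sketch of the two approaches; the exact output messages are an assumption.

n = int(input())

# Variant 1: work with the digits themselves.
original, reversed_number = n, 0
while n > 0:
    reversed_number = reversed_number * 10 + n % 10
    n //= 10

# Variant 2: work with the string representation and slicing.
text = str(original)
is_palindrome_str = text == text[::-1]

print("YES" if original == reversed_number else "NO")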

The third task is a program that reads a date from standard input (in the format DD MM YYYY) and prints YES on standard output if the date is correct and possible, or NO if it is not.
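
One possible sketch of this check, delegating the validation to Python's standard datetime module (the course may instead ask for manual checks):

from datetime import datetime

day, month, year = input().split()          # date read as "DD MM YYYY"
try:
    datetime(int(year), int(month), int(day))
    print("YES")
except ValueError:                          # impossible dates raise ValueError
    print("NO")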

The fourth task is a program that prints the first ten numbers, in descending order, from the interval 0 – n that contain the digit m exactly k times; n, m, and k are integers read from standard input. This task should again be solved in two ways (using integers and using strings).
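
A minimal sketch of the string-based variant; the assumption that all three integers arrive on one line is ours, not the statement's.

n, m, k = (int(x) for x in input().split())   # assumed: n, m, k on a single line

found = 0
for number in range(n, -1, -1):               # descending order, from n down to 0
    if str(number).count(str(m)) == k:        # the digit m appears exactly k times
        print(number)
        found += 1
        if found == 10:
            break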

The fifth task is a program that prints, for each list it reads, the percentage of mirror elements. The mirror elements of a list are the first and the last element, the second and the second-to-last, the third and the third-to-last, and so on.

The sixth task is a program that reads words from a text file (one word per line) and groups the anagrams in the file. The groups that contain more than 5 words should be printed.
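
A minimal sketch of the grouping idea; the file name words.txt is taken from the later exercise description, and the output format is an assumption.

from collections import defaultdict

groups = defaultdict(list)
with open("words.txt") as f:
    for word in f:
        word = word.strip()
        groups["".join(sorted(word))].append(word)   # anagrams share the same sorted letters

for words in groups.values():
    if len(words) > 5:
        print(" ".join(sorted(words)))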

This course provides a solid introduction both to fundamental programming ideas and to the Python programming language. By the end of this course, you will know how to write Python and be able to transfer what you learned on this platform to your own computer. After completing this course, you can proceed with other courses related to data science and artificial intelligence.

Lecture 1: Introduction to Python IDEs

Why do we use Python as a programming language?

Lecture 2: Expressions in Python

Introduction to expressions in Python.

Lecture 3: Python Built-in Functions

Advantages of using functions when programming in Python.

Lecture 4: Control Flow

Lecture 5: Python Data Structures

This lecture explains the main data structures in Python: «list», «tuple», «dict» and «set».

Lecture 6: Python errors and exception handling

Video about how errors and exceptions in Python can be handled.

Lecture 7: NumPy and Pandas

Lecture about two of the most widely used packages in Python: NumPy and pandas.

Lecture 8: Python Sequences

Overview of sequences in Python, illustrated through the implementation and use of lists.

Lecture 9: Python Exercises 1

Set of videos where some programming problems are solved.

Lecture 10: Python Exercises 2

Set of videos where some programming problems are solved: this exercise checks whether a number is a palindrome.

Lecture 11: Python Exercises 3

Set of videos where some programming problems are solved: this exercise checks whether a date read from standard input is valid.

Lecture 12: Python Exercises 4

How to write a program that prints the first ten numbers, in descending order, from the interval 0 – n that contain the digit m exactly k times.

Lecture 13: Exercise 5

A number n is read from standard input. After that, n lists of numbers are read as well.

For each of the read lists, print the percentage of equal mirror elements in the list.

The mirror elements of a list are the first and the last element, the second and the second-to-last, the third and the third-to-last, and so on.

If the list has an odd number of elements, the middle element is considered a mirror element as well.
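
A minimal sketch of one reading of this exercise, computing the share of equal mirror pairs; the input layout (each list on its own line, space-separated) and the output format are assumptions.

n = int(input())
for _ in range(n):
    values = input().split()
    length = len(values)
    pairs = length // 2
    equal = sum(1 for i in range(pairs) if values[i] == values[length - 1 - i])
    if length % 2 == 1:          # the middle element mirrors itself
        equal += 1
        pairs += 1
    percentage = 100 * equal / pairs if pairs else 0.0
    print(f"{percentage:.2f}%")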

Lecture 14: Exercise 6

Print the relative frequencies of all the characters found in the file text.txt. Ignore the case of the characters that are letters.
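
A minimal sketch of this exercise; the exact output format is an assumption.

from collections import Counter

with open("text.txt") as f:
    text = f.read().lower()            # ignore the case of letters

counts = Counter(text)
total = sum(counts.values())
for character, count in sorted(counts.items()):
    print(character, count / total)    # relative frequency of each character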

Lecture 15: Exercise 7

In the file «words.txt» there is a list of words (each of them on a new line). Write all the groups of anagrams that have 5 or more words.

Two words are considered anagrams if they contain the same letters.

Each group should be printed on a new line. The words in a group should be sorted lexicographically in ascending order.

The order in which the groups are printed should follow the order of the words in the file.

Machine Learning in AAL with Python

Do you want to become a master of machine learning? Then start today! It will take you about four hours to learn the basics of machine learning. Imagine: in the time you spend watching useless YouTube videos, you could learn how their recommendation system is built. How do you get adverts for hotels right after searching for them on Booking? If this question is still a mystery to you, then you need an introduction to machine learning.

The field of machine learning is becoming applicable in almost every aspect of our lives. In this course, we will demonstrate how to apply various machine learning models to detect whether a sensor measurement is anomalous. Before training the models for the use case scenario, we will provide a short introduction to the field of AI and ML, data visualization, data preprocessing, and ML model selection. At the end, we will demonstrate how to evaluate your ML models and select the best model for your use case. What do you need as background knowledge?

Nothing special, just:

  • Solid knowledge of Python
  • Basic knowledge of the pandas library
  • Basic knowledge of the goals and ideas behind AI and ML
  • And, of course, most important of all: a desire for (Machine) Learning

In this course you will get answers to the questions that keep popping up on the Internet, starting with: what is machine learning as a branch of Artificial Intelligence? You will then look at the different branches of machine learning, and none of the theory comes without practical use. In videos of around 20 minutes you will learn the whole machine learning process from scratch, and then how these models are tested; for that, we will introduce the basic methods for evaluating ML models. If you know all these things and still get miserable results, then you need to preprocess your data: the lesson on how to preprocess your data using normalization, filling in missing values, and labeling categorical data will help you improve your model. Together with data preprocessing, we will take a look at how to create descriptive visualizations of your data and how to split your dataset into training and test sets so that they stay balanced. Then, of course, the course will teach you how to train several supervised learning models and make predictions with them; and if your data has no output labels, we provide a solution by clustering the data with unsupervised learning techniques. In the end, we will teach you how to choose the best model.

Lecture 1: Introduction to Machine Learning

In the first video you will get to know about the basics of Machine Learning and its use.

Lecture 2: Data preprocessing & visualization

This video covers data preprocessing and visualization, where you learn about the features of the data and how important a part they play in the future of the model.

Lecture 3: Training and Testing Data

Video about training and test datasets and the ways to split the data.
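
A minimal sketch of such a split with scikit-learn; the dataset is a generic placeholder rather than the sensor data used in the course.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Hold out 20% of the samples for testing; stratify keeps the class balance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
print(len(X_train), "training samples,", len(X_test), "test samples")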

Lecture 4: Logistic Regression

In this video you will learn about Logistic Regression.

Lecture 5: Support vector machines

This video explains the Support Vector Machine model in detail.

Lecture 6: K-nearest neighbors algorithm

This video covers classification with the K-Nearest Neighbors (KNN) algorithm.

Lecture 7: Naive Bayes classifier

In this video we talk about Naive Bayes classification and how a machine learning model emerges from the use of Bayes' formula.

Lecture 8: Decision Trees

Lecture about decision trees and the hyperparameters that can be set to change how the tree branches.
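
A minimal sketch of two such hyperparameters in scikit-learn; the dataset and values are illustrative.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# max_depth and min_samples_leaf limit how far the tree is allowed to branch.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5, random_state=42)
tree.fit(X_train, y_train)
print("test accuracy:", tree.score(X_test, y_test))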

Lecture 9: Neural Networks

In this video we take a look at simple neural networks, or feedforward networks, where different activation functions, optimizers, epochs, and batch sizes can change the loss of the training process. Neural networks are such a trend today because of their ability to be customized, even though this makes the whole training process a black box.

Lecture 10: XGBoost

In this video you will take a look at how the boosting technique works; we will focus on the XGBoost model and you will learn about its hyperparameters. At the end of this group of videos on classification models you will learn about Auto Machine Learning with the help of TPOT. AutoML builds and combines models in order to find the best result. And if you do not have any output data, don't worry: we cover that lesson too.
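
A minimal sketch of training an XGBoost classifier with a few of its hyperparameters; the dataset and values are illustrative placeholders.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A few of the hyperparameters discussed in the video.
model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))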

Lecture 11: AutoML with TPOT

In this video you will learn Auto Machine Learning with the help of TPOT.

Lecture 12: Clustering

This lecture shows the clustering models and how the data can be grouped using only its features.
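
A minimal sketch of clustering with k-means in scikit-learn; the dataset is a placeholder and the number of clusters is an assumption.

from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler

# Group samples by their features only, without using any labels.
X, _ = load_iris(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X_scaled)
print(kmeans.labels_[:10])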

Neural Networks for AAL

In recent years, neural networks have surpassed all previous approaches for classifying data in terms of accuracy, be it tabular data, images, or videos, and many other machine learning problems.

The goal of this course is to introduce students to Neural Networks and how they can be used to classify complex data, including tabular data with a large number of attributes and time series data, with a specific focus on their application to problems that involve the analysis and classification of data from AAL systems. It also aims to demonstrate the Python tools and libraries for building and applying Neural Network models.

Students taking this course are expected to have some basic knowledge of other machine learning techniques.

After completing this course students will be able to build neural networks in Python, know what options they have for network architectures and parameters and how to select the best combination. They will know what types of neural networks are suited to which problems and how to classify data using an existing neural network.

Topics covered in this course include feedforward, as well as more advanced neural networks, such as recurrent and convolutional networks. The tools that will be used in this course include the scikit-learn and keras Python libraries.

Lecture 1: Introduction, part 1

Lecture 2: Introduction, part 2

Lecture 3: Introduction, part 3

Lecture 4: Introduction, part 4

Lecture 5: Introduction, part 5

Lecture 6: MLP Classifier in scikit-learn

Lecture 7: Using MLP Classifier

Lecture 8: MLP Classifier with different parameters

This video shows how scikit-learn can be used to select the best parameters for the MLPClassifier.
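
A minimal sketch of one way to do this with a grid search; the parameter grid and dataset are illustrative.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)

# Try a few architectures and activation functions and keep the best by cross-validation.
param_grid = {
    "hidden_layer_sizes": [(50,), (100,), (50, 50)],
    "activation": ["relu", "tanh"],
}
search = GridSearchCV(MLPClassifier(max_iter=2000, random_state=0), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)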

Lecture 9: MLP Classifier Grid Search

Lecture 10: Intro to Keras

Another tool that can be used to build neural networks in Python is keras. This library is introduced in this video. Students can learn how to build multi-layer perceptrons in keras and what options exist for the various parameters.
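
A minimal sketch of a multi-layer perceptron in keras, on a placeholder dataset; the architecture and training settings are illustrative.

from sklearn.datasets import load_breast_cancer
from tensorflow import keras

X, y = load_breast_cancer(return_X_y=True)

# A small multi-layer perceptron for binary classification.
model = keras.Sequential([
    keras.layers.Input(shape=(X.shape[1],)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)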

Lecture 11: Keras feedforward example 1

Lecture 12: Keras feedforward example 2

Lecture 13: Advanced neural networks

This part of the course focuses on more advanced neural networks. Here, neural networks with recurrent and convolutional layers and their possible applications are introduced. 

Lecture 14: Keras autoencoder example

We look at an example of anomaly detection for time series data using these advanced types of neural networks in keras.
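
As a rough illustration of the idea, here is a minimal sketch in which a plain dense autoencoder stands in for the more advanced architectures used in the video; the data, window length, and anomaly threshold are all assumptions.

import numpy as np
from tensorflow import keras

# Illustrative data: sliding windows of 30 sensor readings each.
windows = np.random.normal(size=(1000, 30)).astype("float32")

# The autoencoder learns to compress and reconstruct normal windows.
model = keras.Sequential([
    keras.layers.Input(shape=(30,)),
    keras.layers.Dense(8, activation="relu"),   # bottleneck
    keras.layers.Dense(30),                     # reconstruction
])
model.compile(optimizer="adam", loss="mse")
model.fit(windows, windows, epochs=10, batch_size=32, verbose=0)

# Windows that are reconstructed poorly are flagged as anomalies.
errors = np.mean((model.predict(windows, verbose=0) - windows) ** 2, axis=1)
threshold = np.percentile(errors, 99)
print("anomalous windows:", np.sum(errors > threshold))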

Lecture 15: Keras – Future directions

Data Visualization

Is your data chaotic, and you don't know what to do with it? Then this course is for you.

After completing the course you will be able to tell a story about your data with the help of the visualizations available in Python. Unanswered questions about your data will become a thing of the past, because with simple and fast functions you will be able to see the characteristics of your dataset.

Using this approach you will be able to build a complete picture of your problem, which will then be easier to solve. This is exactly the purpose of visualization: to discover something new about the data.

This course will focus on visualizations using the Python programming language, one of the most used languages in the field of data analytics, processing and visualization.

The requirements for taking this course are basic knowledge of Python or of data analytics in another language.

Lecture 1: Introduction

This video introduces the course and explains the importance of data visualization in data analysis.

Lecture 2: Big data visualizations

Through this video you will learn about the importance of data visualization and how it can help when it comes to big data.

Lecture 3: Pandas

Video about the basics of the pandas package in Python and an introduction to the analysis of the use-case dataset used for the visualizations.

Lecture 4: Matplotlib

Video that features the matplotlib library along with practical examples. 
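
A minimal sketch of a basic matplotlib plot; the data is made up for illustration.

import matplotlib.pyplot as plt
import pandas as pd

# Illustrative data: daily step counts for one week.
df = pd.DataFrame({"day": list(range(1, 8)),
                   "steps": [4200, 5100, 3900, 6100, 5800, 7200, 3000]})

plt.plot(df["day"], df["steps"], marker="o")
plt.xlabel("Day of the week")
plt.ylabel("Steps")
plt.title("Daily activity")
plt.show()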

Lecture 5: Seaborn

Introduction to the Seaborn package

Lecture 6: Plotly

Introduction to the Plotly package
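
A minimal sketch of an interactive Plotly Express figure, using a sample dataset that ships with the library.

import plotly.express as px

# Illustrative scatter plot; color encodes a categorical column.
df = px.data.iris()
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species")
fig.show()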

Lecture 7: Advanced visualizations, part 1

Video about advanced visualization with Plotly. By the end you will know how to extract important information from your data, how to interpret the visualizations, and how to use them to discover new insights in the analyzed datasets or to choose models for the machine learning problems at hand.

Lecture 8: Advanced visualization, Part 2

Continuation of the advanced Plotly visualization examples from the previous lecture.

Lecture 9: Advanced visualizations part 3

Continuation of the advanced Plotly visualization examples.

Lecture 10: Advanced visualization, part 4

The last of the videos about advanced visualization with Plotly.

Stream processing in Apache Flink for AAL

Do you have a lot of sensor data that you want to consume and process in real time? Then Apache Flink is the right tool for you, and this course is a great introduction to the framework.

After the completion of this course, you will be able to perform different types of data analytics tasks on your data streams in real-time. The results of the analytics can be consumed in real-time as well. You will be able to create your own Apache Flink application and deploy it on a cluster, as well as connect that application with different types of data sources and sinks in the future.

The programming language used in this course is Java (specifically Java 8 with its Streams API). Another requirement for this course is basic knowledge of Maven, a build automation tool.

Lecture 1: Introduction

Introduction to big data. We define big data and the value it can provide. We also give a short introduction to the main frameworks for processing big data, and then focus on Apache Flink as a framework for data stream processing.

Lecture 2: Installing Flink with Docker

Hands-on video on how to install Apache Flink locally, with instructions for creating a Flink cluster using Docker.

Lecture 3: First Flink app

In this video we are creating our first Apache Flink application with the help of the Maven quickstart script and we demonstrate the anatomy of a Flink application.

Lecture 4: Creating test data stream

In this video we are generating a simulation of a real-time data stream of sensor data which will be later used in the other videos for solving tasks with operators in Apache Flink.

Lecture 5: Flink operators part 1

In this video we are defining all of the relevant Apache Flink operators and we apply them to a concrete data analytics task related to the data stream created in the fourth video.

Lecture 6: Flink operators part 2

Continuation of the previous video: we define more of the relevant Apache Flink operators and apply them to the data analytics task on the data stream created in the fourth video.

Lecture 7: Windowing

In this video we demonstrate how we can use the concept of time windowing of the data for real-time data analytics tasks. We are looking into data aggregations with the usage of tumbling and sliding windows.

Machine Vision in AAL

Almost every modern AAL system uses some imaging or video based sensing equipment. Processing the signals from cameras is not an easy task. 

OpenCV is one of the largest and most widely used Computer Vision programming libraries. It contains many essential tools for image and video processing that can be applied to AAL systems.

This course is intended to give a practical overview of the processes in machine vision and the tools from OpenCV that can be used for building machine vision based systems.

The main goal of this course is to give an introduction to Machine Vision in a practical way, using the Python programming language and the OpenCV library. Each video contains descriptions and practical examples using OpenCV tools and functions, covering basic image processing operations and some advanced tools for detecting features in images and performing image retrieval and recognition tasks.

Lecture 1: Introduction

Introduction to OpenCV and to Machine Vision in general.

Lecture 2: Basic operations with images

Overview of some basic operations with images.

Lecture 3: Adaptive thresholding and filters

This video describes thresholding and gives an overview of image filters, how to build them and how to use them.
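
A minimal sketch of these two operations with OpenCV in Python; the file name and parameter values are illustrative.

import cv2

# Illustrative file name; any image readable by OpenCV works.
image = cv2.imread("room.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # smoothing filter to reduce noise

# Adaptive thresholding chooses a different threshold for each neighbourhood.
binary = cv2.adaptiveThreshold(
    blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2
)
cv2.imwrite("room_binary.jpg", binary)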

Lecture 4: Line and circle detection

Introduction to line and circle detection using OpenCV.

Lecture 5: Advanced operations with images

This video describes contour detection, feature detection and image histograms and concludes this series.

Do you have a question or suggestion?
You can send us an email
