Articles

Articles are a collaborative effort to provide a single canonical page on all topics relevant to the practice of radiology. As such, articles are written and continuously improved upon by countless contributing members. Our dedicated editors oversee each edit for accuracy and style. Find out more about articles.

91 results found
Article

METhodological RadiomICs Score (METRICS)

The METhodological RadiomICs Score (METRICS) is a 30-item quality evaluation tool for artificial intelligence (AI) and radiomics papers 1. It aims to assess and improve the quality of radiomics research. METRICS is endorsed by the European Society of Medical Imaging Informatics (EuSoMII). Overv...
Article

CheckList for EvaluAtion of Radiomics research (CLEAR)

The CheckList for Evaluation of Radiomics Research (CLEAR) is a 58-item reporting guideline designed specifically for radiomics. It aims to improve the quality of reporting in radiomics research 1. CLEAR is endorsed by the European Society of Radiology (ESR) and the European Society of Medical I...
Article

Large language models

Large language models are advanced artificial intelligence systems designed to understand and generate human-like text. These models are built using deep learning techniques and are trained on vast amounts of text data, such as books, articles, and websites. Large language models utilize algorit...
Article

Hebbian learning

Hebbian learning describes a type of activity-dependent modification of the strength of synaptic transmission at pre-existing synapses, which plays a central role in the capacity of the brain to convert transient experiences into memory. According to Hebb et al 1, two cells or systems of cells th...
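
Below is a minimal, illustrative sketch of Hebb's rule in Python: a synaptic weight grows in proportion to the product of pre- and post-synaptic activity. The learning rate, activity pattern and toy "response" function are assumptions made purely for the example.

```python
import numpy as np

# Toy sketch of Hebb's rule: delta_w = eta * (presynaptic activity) * (postsynaptic activity).
# The learning rate and activity values are arbitrary illustrative choices.
eta = 0.1                          # learning rate (assumed)
w = np.zeros(3)                    # weights of 3 presynaptic inputs
pre = np.array([1.0, 0.0, 1.0])    # presynaptic activity pattern

for _ in range(5):                 # repeated co-activation strengthens the synapses
    post = float(w @ pre) + 1.0    # toy postsynaptic response (assumed form)
    w += eta * pre * post          # "cells that fire together wire together"

print(w)                           # co-active inputs have grown; the silent input stays at 0
```
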
Article

Autoencoder

Autoencoders are an unsupervised learning technique in which artificial neural networks learn to produce a compressed representation of the input data. Essentially, autoencoding is a data compression algorithm in which the compression and decompression functions are learned automatical...
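
A minimal sketch of the idea in PyTorch is shown below: an encoder compresses the input to a small bottleneck vector and a decoder reconstructs the input from it. The layer sizes and the 784-dimensional input (e.g. a flattened 28 x 28 image) are illustrative assumptions, not a prescribed architecture.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_in=784, n_latent=32):
        super().__init__()
        # encoder: input -> compressed (bottleneck) representation
        self.encoder = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                                     nn.Linear(128, n_latent))
        # decoder: bottleneck -> reconstruction of the input
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_in))

    def forward(self, x):
        z = self.encoder(x)          # compressed representation
        return self.decoder(z)       # reconstruction

model = Autoencoder()
x = torch.rand(16, 784)                      # dummy batch of flattened images
loss = nn.functional.mse_loss(model(x), x)   # reconstruction error drives learning
loss.backward()
```
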
Article

Learning curve (machine learning)

A learning curve is a plot of the learning performance of a machine learning model (usually measured as loss or accuracy) over time (usually measured in epochs). Learning curves are a widely used diagnostic tool in machine learning to get an overview of the learning and generalization behavi...
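
A brief sketch of how such a curve is produced is shown below: the loss recorded at each epoch for the training and validation sets is plotted against epoch number. The loss values here are made-up placeholders standing in for whatever a real training loop would record.

```python
import matplotlib.pyplot as plt

# Placeholder per-epoch losses (assumed values for illustration only)
epochs = list(range(1, 11))
train_loss = [0.90, 0.70, 0.55, 0.45, 0.38, 0.33, 0.29, 0.26, 0.24, 0.22]
val_loss   = [0.95, 0.75, 0.60, 0.52, 0.48, 0.46, 0.45, 0.45, 0.46, 0.47]

plt.plot(epochs, train_loss, label="training loss")
plt.plot(epochs, val_loss, label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()   # training and validation curves diverging late in training would suggest overfitting
```
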
Article

Deep learning frameworks

Deep learning frameworks are instruments for training and validating deep neural networks through high-level programming interfaces. Widely used deep learning frameworks include the libraries PyTorch, TensorFlow, and Keras. A programmer can use these libraries of high-level functions to quickly de...
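
As a rough illustration of what such a high-level interface looks like, the Keras sketch below defines and compiles a tiny network in a few lines. The layer sizes and input dimensionality are arbitrary placeholders.

```python
from tensorflow import keras

# A small fully connected network defined through Keras' high-level API
model = keras.Sequential([
    keras.Input(shape=(32,)),                          # 32 input features (assumed)
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),       # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```
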
Article

Underfitting

Underfitting in statistical and machine learning modeling is the counterpart of overfitting. It occurs when a model is not complex enough to accurately capture the relationships between a dataset's features and a target variable, i.e. the model struggles to learn the patterns in the dat...
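
A toy illustration of the idea is sketched below: a straight line is not complex enough to capture a clearly quadratic relationship, so even the training error remains high. The synthetic data and model choice are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 100).reshape(-1, 1)
y = X.ravel() ** 2 + rng.normal(scale=0.5, size=100)   # quadratic target with noise

linear = LinearRegression().fit(X, y)                  # a model too simple for the data
print("R^2 on the training data:", linear.score(X, y)) # low score even on training data: underfitting
```
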
Article

ImageNet dataset

ImageNet is an extensive image database that has been instrumental in advancing computer vision and deep learning research. It contains more than 14 million hand-annotated images classified into more than 20,000 categories. In at least one million of the images, bounding boxes are also prov...
Article

Deep learning

Deep learning is a subset of machine learning based on multi-layered (a.k.a. “deep”) artificial neural networks. Their highly flexible architectures can learn directly from data (such as images, video or text) without the need for hand-coded rules and can increase their predictive accuracy when p...
Article

Generalisability

Generalisability in machine learning represents how well a model's performance carries over to new, previously unseen datasets. Evaluating the generalisability of machine learning applications is crucial, as this has profound implications for their clinical applicability. Briefly, two main techniques are used fo...
Article

Information leakage

Information leakage is one of the most common and important errors in data handling in machine learning applications, including those in radiology. Briefly, it refers to incomplete separation of the training, validation, and testing datasets, which can significantly change the apparent perfor...
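
One common form of leakage is sketched below: fitting a preprocessing step on the whole dataset before splitting lets test-set statistics leak into training, whereas fitting every step on the training split only avoids it. The dataset and model are generic scikit-learn placeholders, not from the article above.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Leaky version (do not do this): the scaler would see the test data
# scaler = StandardScaler().fit(X)

# Leak-free version: every step is fitted on the training split only
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```
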
Article

Explainable artificial intelligence

Explainable artificial intelligence usually refers to narrow artificial intelligence models made with methods that enable and enhance human understanding of how the models reached outputs in each case. Many older AI models, e.g. decision trees, were inherently understandable in terms of how they...
Article

Findable accessible interoperable reusable data principles (FAIR)

The FAIR (findable, accessible, interoperable, reusable) data principles are a set of guidelines for enhancing the semantic machine interpretability of data, thereby improving its richness and quality. Since their inception, multiple international organizations have endorsed the application of FAIR principl...
Article

Federated learning

Federated learning, also known as distributed learning, is a technique that facilitates the creation of robust artificial intelligence models in which models are trained on data held on local devices (nodes), which then transfer their weights to a central model. Models can potentially be trained using larger and/or more d...
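
A deliberately simplified sketch of the central idea follows: each node "trains" on its own local data and only the resulting weights are sent to the server, which averages them into the central model (a FedAvg-style aggregation). The local update here is a placeholder; no real optimisation is performed.

```python
import numpy as np

def local_update(global_weights, local_data):
    # Stand-in for local training: nudge the weights using only this node's data
    return global_weights + 0.01 * local_data.mean(axis=0)

global_weights = np.zeros(4)
node_datasets = [np.random.rand(50, 4) for _ in range(3)]    # raw data never leaves its node

for _ in range(10):                                          # communication rounds
    local_weights = [local_update(global_weights, d) for d in node_datasets]
    global_weights = np.mean(local_weights, axis=0)          # central aggregation of weights only

print(global_weights)
```
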
Article

Ground truth

Ground truth is a term used in statistics and machine learning to refer to data assumed to be correct. Regarding the development of machine learning algorithms in radiology, the ground truth for image labeling is sometimes based on pathology or lab results while, in other cases, on the expert o...
Article

Hyperparameter (machine learning)

Hyperparameters are specific aspects of a machine learning algorithm that are chosen before the algorithm runs on data. These hyperparameters are model-specific, e.g. they would typically include the number of epochs for a deep learning model or the number of branches in a decision tree model. Th...
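
The sketch below illustrates the distinction: the maximum tree depth and split criterion are fixed before the algorithm sees any data (hyperparameters), while the actual split thresholds (the model's parameters) are learned during fitting. The specific values and dataset are arbitrary examples.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

clf = DecisionTreeClassifier(max_depth=3, criterion="gini")  # hyperparameters chosen up front
clf.fit(X, y)                                                # parameters are learned here
print("training accuracy:", clf.score(X, y))
```
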
Article

Linear discriminant analysis

Linear discriminant analysis (LDA) is a type of algorithmic model employed in machine learning to classify data. Unlike some other now-popular models, linear discriminant analysis has been used for decades in both AI for radiology 1 and many other biomedical applications. Linear discri...
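
A short sketch of LDA used as a classifier in scikit-learn is shown below; the wine dataset is simply a convenient stand-in for any table of numeric features with class labels.

```python
from sklearn.datasets import load_wine
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)

lda = LinearDiscriminantAnalysis()                 # linear decision boundaries between classes
scores = cross_val_score(lda, X, y, cv=5)          # 5-fold cross-validated accuracy
print("mean cross-validated accuracy:", scores.mean())
```
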
Article

Dice similarity coefficient

The Dice similarity coefficient, also known as the Sørensen–Dice index or simply Dice coefficient, is a statistical tool which measures the similarity between two sets of data. This index has become arguably the most broadly used tool in the validation of image segmentation algorithms created wi...
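
For two binary segmentation masks A and B the coefficient is DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical masks). A minimal sketch of the computation follows; the tiny masks are made-up examples.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0                                    # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

pred  = np.array([[0, 1, 1],
                  [0, 1, 0]])                         # predicted segmentation
truth = np.array([[0, 1, 0],
                  [0, 1, 1]])                         # reference (ground truth) segmentation
print(dice(pred, truth))                              # 2*2 / (3+3) ≈ 0.67
```
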
Article

Semi-supervised learning (machine learning)

Semi-supervised learning is an approach to machine learning which uses some labeled data and some data without labels to train models. This approach can be useful to overcome the problem of insufficient quantities of labeled data. Some consider it to be a variation of supervised learning, whilst...
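
One simple semi-supervised strategy (self-training with pseudo-labels) is sketched below: a model fitted on the small labelled set predicts labels for the unlabelled pool, the most confident predictions are adopted as pseudo-labels, and the model is refitted on the enlarged set. The dataset, confidence threshold and model are arbitrary assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
X_lab, y_lab = X[:50], y[:50]                  # small labelled subset
X_unlab = X[50:]                               # labels treated as unavailable

model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

proba = model.predict_proba(X_unlab)
confident = proba.max(axis=1) > 0.95           # keep only confident predictions as pseudo-labels
X_pseudo = X_unlab[confident]
y_pseudo = proba.argmax(axis=1)[confident]

# Refit on labelled data plus pseudo-labelled data
model.fit(np.vstack([X_lab, X_pseudo]),
          np.concatenate([y_lab, y_pseudo]))
```
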
