Articles

Articles are a collaborative effort to provide a single canonical page on all topics relevant to the practice of radiology. As such, articles are written and continuously improved upon by countless contributing members. Our dedicated editors oversee each edit for accuracy and style. Find out more about articles.

91 results found
Article

Bayes' factor

A Bayes' factor is a number that quantifies the relative likelihood of two models or hypotheses, expressed as a ratio: e.g. if two models are equally likely based on the prior evidence (or there is no prior evidence), then the Bayes factor would be one. Such factors have several use...
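
A minimal worked form of that ratio (notation assumed here, not taken from the teaser): for observed data D and competing hypotheses H1 and H2,

K = \frac{P(D \mid H_1)}{P(D \mid H_2)}

so K = 1 when the data support both hypotheses equally, K > 1 favors H1, and K < 1 favors H2.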
Article

Selection bias

Selection bias is a type of bias created when the data sampled are not representative of the population or group that a study or model aims to make predictions about. Selection bias is the result of systematic errors in data selection and collection. Practically speaking, selection bi...
Article

Automation bias

Automation bias is a form of cognitive bias occurring when humans overvalue information produced by an automated, usually computerized, system. Users of automated systems may fail to recognize, or may simply ignore, illogical or incorrect information produced by these systems. Computer programs may crea...
Article

Information leakage

Information leakage is a common and important error in data handling in machine learning applications, including those in radiology. Briefly, it refers to incomplete separation of the training, validation, and testing datasets, which can significantly change the apparent perfor...
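
A minimal sketch of the pitfall the teaser describes, assuming a tabular dataset with one row per image and a patient identifier column (the column names and split strategy are illustrative, not from the article): splitting by patient and fitting preprocessing on the training set only keeps test-set information out of training.

# Hypothetical illustration of avoiding train/test information leakage.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "feature":    [0.2, 0.3, 0.5, 0.4, 0.9, 0.8, 0.1, 0.2],
    "label":      [0, 0, 1, 1, 1, 1, 0, 0],
})

# Group by patient so images from the same patient never appear in both sets,
# which would otherwise leak patient-specific information into testing.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(df, groups=df["patient_id"]))
train, test = df.iloc[train_idx], df.iloc[test_idx]

# Fit any preprocessing (e.g. normalization) on the training set only,
# then apply it to the test set -- fitting on the full dataset is leakage.
mean, std = train["feature"].mean(), train["feature"].std()
train_z = (train["feature"] - mean) / std
test_z = (test["feature"] - mean) / std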
Article

Dice similarity coefficient

The Dice similarity coefficient, also known as the Sørensen–Dice index or simply Dice coefficient, is a statistical tool which measures the similarity between two sets of data. This index has become arguably the most broadly used tool in the validation of image segmentation algorithms created wi...
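
A rough illustration of how the index is typically computed for binary segmentation masks (the array values and function name are assumptions for illustration): DSC = 2|A ∩ B| / (|A| + |B|).

# Minimal sketch: Dice similarity coefficient for two binary segmentation masks.
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

prediction   = np.array([[0, 1, 1], [0, 1, 0]])
ground_truth = np.array([[0, 1, 1], [1, 1, 0]])
print(dice_coefficient(prediction, ground_truth))  # 2*3 / (3+4) = 0.857...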
Article

Boosting

Boosting is an ensemble technique that creates increasingly complex algorithms from building blocks of relatively simple decision rules for binary classification tasks. This is achieved by sequentially training new models (or 'weak' learners) which focus on examples that were classified incorre...
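
A brief sketch of the idea using scikit-learn's AdaBoost implementation (the synthetic dataset and parameters are assumptions for illustration, not from the article): by default each weak learner is a one-level decision tree, and new learners are trained sequentially with previously misclassified examples upweighted.

# Minimal sketch: boosting simple decision rules for binary classification.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# The default weak learner is a depth-1 decision tree (a "stump");
# each new stump focuses on examples the previous ones got wrong.
model = AdaBoostClassifier(n_estimators=50, random_state=0)
model.fit(X, y)
print(model.score(X, y))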
Article

Transfer learning

The concept of transfer learning in artificial neural networks is taking knowledge acquired from training on one particular domain and applying it in learning a separate task. In recent years, a well-established paradigm has been to pre-train models using large-scale data (e.g., ImageNet) and t...
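
A minimal sketch of that paradigm with PyTorch/torchvision (the model choice and two-class head are assumptions for illustration): a network pre-trained on ImageNet is reused, its backbone frozen, and only a new classification head is trained on the target task.

# Minimal sketch: transfer learning by training only a new classifier head.
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pre-trained on ImageNet (weights string per recent torchvision).
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pre-trained backbone so its learned features are kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer for a hypothetical 2-class task
# (e.g. normal vs abnormal); only this layer's weights will be trained.
model.fc = nn.Linear(model.fc.in_features, 2)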
Article

Activation function

In neural networks, activation functions perform a transformation on a weighted sum of inputs plus biases to a neuron in order to compute its output. Using a biological analogy, the activation function determines the “firing rate” of a neuron in response to an input or stimulus. These functions...
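
A small numeric sketch of that transformation (the weights, bias, and the choice of ReLU and sigmoid are illustrative assumptions):

# Minimal sketch: applying activation functions to a neuron's weighted sum.
import numpy as np

x = np.array([0.5, -1.2, 3.0])   # inputs to the neuron
w = np.array([0.8, 0.1, -0.4])   # weights
b = 0.2                          # bias

z = np.dot(w, x) + b             # weighted sum of inputs plus bias

relu    = np.maximum(0.0, z)          # rectified linear unit
sigmoid = 1.0 / (1.0 + np.exp(-z))    # squashes z into (0, 1)
print(z, relu, sigmoid)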
Article

Cross entropy

Cross entropy is a measure of the difference between two probability distributions. In the context of supervised learning, one of these distributions represents the “true” label for a training example, where the correct responses are assigned a value of 100%. In machine learning, if p(x)...
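
The standard form of that measure, written out for reference (notation assumed): for a true distribution p and a predicted distribution q over classes x,

H(p, q) = -\sum_{x} p(x) \log q(x)

which reduces to -\log q(\text{correct class}) when p places 100% of its mass on the correct label.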
Article

Curse of dimensionality

The curse of dimensionality can refer to a number of phenomena related to high-dimensional data in several fields. In terms of machine learning for radiology, it generally refers to the phenomenon that as the number of image features employed to train an algorithm increases, there is a geometric...
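
A toy illustration of that geometric growth (the bin count and feature counts are arbitrary assumptions): if each feature is split into 10 bins, the number of feature-space cells that training data must cover grows as 10^d.

# Toy sketch: feature-space cells grow geometrically with the number of features.
bins_per_feature = 10
for n_features in (1, 2, 5, 10):
    print(n_features, bins_per_feature ** n_features)
# 1 -> 10, 2 -> 100, 5 -> 100000, 10 -> 10000000000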
Article

Cybersecurity

Cybersecurity is the protection of digital data, software and hardware from risks including attacks or other problems related to their integrity and/or data confidentiality. Cybersecurity may utilize many different types of tools and protocols including encryption, firewalls and other infrastruc...
Article

Autoencoder

Autoencoders are an unsupervised learning technique in which artificial neural networks are used to learn to produce a compressed representation of the input data. Essentially, autoencoding is a data compression algorithm where the compression and decompression functions are learned automatical...
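
A minimal sketch of such a network in PyTorch (the layer sizes are arbitrary assumptions): an encoder compresses the input to a small code and a decoder reconstructs it, trained to minimize reconstruction error.

# Minimal sketch: a fully connected autoencoder for 784-dimensional inputs.
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress 784 inputs down to a 32-dimensional code.
        self.encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
        # Decoder: reconstruct the original 784 values from the code.
        self.decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training would minimize a reconstruction loss, e.g. nn.MSELoss(),
# between the output and the original input.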
Article

Deep learning frameworks

Deep learning frameworks are instruments for training and validating deep neural networks, through high-level programming interfaces. Widely used deep learning frameworks include the libraries PyTorch, TensorFlow, and Keras. A programmer can use these libraries of higher functions to quickly de...
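
As a brief illustration of those higher-level interfaces, a small classifier can be declared in a few lines with the Keras API (the layer sizes and input shape are arbitrary assumptions):

# Minimal sketch: declaring and compiling a small network with Keras.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()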
Article

Deep learning

Deep learning is a subset of machine learning based on multi-layered (a.k.a. “deep”) artificial neural networks. Their highly flexible architectures can learn directly from data (such as images, video or text) without the need for hand-coded rules and can increase their predictive accuracy when p...
Article

Unsupervised learning (machine learning)

Unsupervised learning is one of the main types of algorithms used in machine learning. Unsupervised learning algorithms are used on datasets where output labels are not provided. Hence, instead of trying to predict a particular output for each input, these algorithms attempt to discover the un...
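
A short sketch of one such algorithm, k-means clustering with scikit-learn (the points and cluster count are illustrative assumptions): no labels are supplied, and the algorithm groups the points by structure it finds on its own.

# Minimal sketch: unsupervised clustering of unlabeled points with k-means.
import numpy as np
from sklearn.cluster import KMeans

# Two obvious groups of 2-D points, but no labels are given to the model.
X = np.array([[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],
              [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # cluster assignments discovered without labels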
Article

Single linear regression

Single linear regression, also known as simple linear regression, in statistics, is a technique that models the relationship between one independent and one dependent variable as a first-degree polynomial. Linear regression is the simplest example of curve fitting, a type of mathematical problem i...
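
The model the teaser describes can be written (notation assumed) as

y = \beta_0 + \beta_1 x + \varepsilon

where \beta_0 is the intercept, \beta_1 the slope, and \varepsilon the residual error; fitting chooses \beta_0 and \beta_1 to minimize the sum of squared residuals.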
Article

Learning curve (machine learning)

A learning curve is a plot of the learning performance of a machine learning model (usually measured as loss or accuracy) over training time (usually measured in epochs). Learning curves are a widely used diagnostic tool in machine learning to get an overview of the learning and generalization behavi...
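
A minimal sketch of such a plot (the loss values are made-up numbers purely for illustration):

# Minimal sketch: plotting training vs. validation loss per epoch.
import matplotlib.pyplot as plt

epochs = range(1, 11)
train_loss = [0.9, 0.7, 0.55, 0.45, 0.38, 0.33, 0.29, 0.26, 0.24, 0.22]  # made up
val_loss   = [0.95, 0.75, 0.62, 0.55, 0.52, 0.51, 0.52, 0.54, 0.57, 0.60]  # made up

plt.plot(epochs, train_loss, label="training loss")
plt.plot(epochs, val_loss, label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()  # a widening gap between the curves suggests overfitting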
Article

Ground truth

Ground truth is a term used in statistics and machine learning to refer to data assumed to be correct. Regarding the development of machine learning algorithms in radiology, the ground truth for image labeling is sometimes based on pathology or lab results while, in other cases, on the expert o...
Article

Heat map

Heat maps are visual representations of data in matrices with colors. Two dimensions of the data are captured by the location of a point (i.e., a map) and a third dimension is represented by the color of the point (i.e., the value). Some nuclear medicine studies are technically examples of heat...
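
A small sketch of the idea with matplotlib (the matrix values are arbitrary assumptions): row/column position gives two dimensions, color gives the third.

# Minimal sketch: rendering a small matrix as a heat map.
import numpy as np
import matplotlib.pyplot as plt

values = np.random.default_rng(0).random((5, 5))  # arbitrary 5x5 matrix

plt.imshow(values, cmap="hot")  # position encodes two dimensions, color the third
plt.colorbar(label="value")
plt.show()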
Article

Overfitting

Overfitting is a problem in machine learning that introduces errors based on noise and meaningless data into prediction or classification. Overfitting tends to happen when training datasets are of insufficient size or include parameters and/or unrelated featu...
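
A toy sketch of the phenomenon (the data and polynomial degrees are illustrative assumptions): a high-degree polynomial fits the noisy training points almost perfectly but generalizes worse than a simple line.

# Toy sketch: a high-degree polynomial fits noise in the training data.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, size=10)  # linear trend + noise

simple_fit  = np.polyfit(x_train, y_train, deg=1)  # captures the underlying trend
complex_fit = np.polyfit(x_train, y_train, deg=9)  # chases the noise (overfits)

x_new = np.linspace(0, 1, 5)
print(np.polyval(simple_fit, x_new))
print(np.polyval(complex_fit, x_new))  # erratic between the training points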
