Articles

Articles are a collaborative effort to provide a single canonical page on all topics relevant to the practice of radiology. As such, articles are written and continuously improved upon by countless contributing members. Our dedicated editors oversee each edit for accuracy and style. Find out more about articles.

90 results found
Article

CheckList for EvaluAtion of Radiomics research (CLEAR)

The CheckList for Evaluation of Radiomics Research (CLEAR) is a 58-item reporting guideline designed specifically for radiomics. It aims to improve the quality of reporting in radiomics research 1. CLEAR is endorsed by the European Society of Radiology (ESR) and the European Society of Medical I...
Article

Artificial intelligence

Artificial intelligence (AI) has been defined by some as the "branch of computer science dealing with the simulation of intelligent behavior in computers" 1; however, the precise definition is a matter of debate among experts. An alternative definition is the branch of computer science dedicat...
Article

Image normalization

Image normalization is a process, often used in the preparation of data sets for artificial intelligence (AI), in which multiple images are put into a common statistical distribution in terms of size and pixel values; however, a single image can also be normalized within itself. The process usua...
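The two normalization schemes described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a fixed recipe: function names and the toy pixel values are illustrative, and real pipelines must also handle constant images (zero standard deviation or range).

```python
import numpy as np

def zscore_normalize(img):
    """Normalize a single image to zero mean and unit variance (z-score)."""
    img = img.astype(np.float64)
    return (img - img.mean()) / img.std()

def minmax_normalize(img, lo=0.0, hi=1.0):
    """Rescale the pixel values of a single image into the range [lo, hi]."""
    img = img.astype(np.float64)
    scaled = (img - img.min()) / (img.max() - img.min())
    return scaled * (hi - lo) + lo

# Toy 2x2 "image" with arbitrary pixel values
img = np.array([[0, 50], [100, 200]])
z = zscore_normalize(img)   # mean ~0, standard deviation ~1
m = minmax_normalize(img)   # values rescaled into [0, 1]
```

Normalizing every image in a data set this way places them in a common statistical distribution, which typically stabilizes training of AI models.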
Article

Segmentation

Segmentation, in the context of informatics for radiology, refers to the delineation of areas of interest in imaging in terms of pixels or voxels. Segmentation is often accomplished by computerized algorithms that vary in complexity from simply selecting pixels of similar values in proximity t...
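The simplest algorithmic approach mentioned above, selecting pixels of similar values in proximity, is often implemented as region growing. The following is a hedged sketch (function name, tolerance, and toy image are all illustrative) using 4-connected neighbours on a 2D array:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Region-growing segmentation: starting from a seed pixel, include
    4-connected neighbours whose value is within `tol` of the seed value.
    Returns a boolean mask of the segmented region."""
    h, w = img.shape
    seed_val = int(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(int(img[nr, nc]) - seed_val) <= tol:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask

# Toy image: a bright 2x2 "lesion" on a dark background
img = np.array([
    [10,  10,  10, 10],
    [10, 200, 205, 10],
    [10, 198, 202, 10],
    [10,  10,  10, 10],
])
mask = region_grow(img, seed=(1, 1), tol=10)  # segments the 4 bright pixels
```

More sophisticated segmentation algorithms, including deep learning models, replace this simple intensity criterion with learned features, but the output is the same kind of pixel- or voxel-wise mask.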
Article

Imaging data sets (artificial intelligence)

The aggregation of an imaging data set is a critical step in building artificial intelligence (AI) for radiology. Imaging data sets are used in various ways including training and/or testing algorithms. Many data sets for building convolutional neural networks for image identification involve at...
Article

Explainable artificial intelligence

Explainable artificial intelligence usually refers to narrow artificial intelligence models made with methods that enable and enhance human understanding of how the models reached outputs in each case. Many older AI models, e.g. decision trees, were inherently understandable in terms of how they...
Article

Bayes' theorem

Bayes' theorem, also known as Bayes' rule or Bayes' law, is a theorem in statistics that describes the probability of one event or condition as it relates to another known event or condition. Mathematically, the theorem can be expressed as follows: P(A|B) = P(B|A) × P(A) / P(B), where given that...
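The formula above can be made concrete with a worked diagnostic example. The prevalence, sensitivity, and false positive rate below are hypothetical numbers chosen only to illustrate the arithmetic:

```python
def posterior(p_b_given_a, p_a, p_b):
    """Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Illustrative example: A = disease present, B = positive test.
prevalence = 0.01           # P(A)
sensitivity = 0.90          # P(B|A)
false_positive_rate = 0.05  # P(B|not A), i.e. 1 - specificity

# Total probability of a positive test, P(B), by the law of total probability:
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

p_disease_given_positive = posterior(sensitivity, prevalence, p_positive)
# Despite the positive test, the post-test probability is only about 15%,
# because the low prevalence dominates.
```

This kind of calculation underlies the interpretation of imaging findings: the post-test probability of disease depends not only on test performance but also on the pre-test probability.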
Article

Convolutional neural network

A convolutional neural network (CNN) is a particular implementation of a neural network used in deep learning that exclusively processes array data such as images, and is thus frequently used in machine learning applications targeted at medical images 1. Architecture A convolutional neural net...
Article

ImageNet dataset

ImageNet is an extensive image database that has been instrumental in advancing computer vision and deep learning research. It contains more than 14 million hand-annotated images classified into more than 20,000 categories. In at least one million of the images, bounding boxes are also prov...
Article

Large language models

Large language models are advanced artificial intelligence systems designed to understand and generate human-like text. These models are built using deep learning techniques and are trained on vast amounts of text data, such as books, articles, and websites. Large language models utilize algorit...
Article

Natural language processing

Natural language processing (NLP) is an area of active research in artificial intelligence concerned with human languages. Natural language processing programs use human written text or human speech as data for analysis. The goals of natural language processing programs can vary from generating ...
Article

Hebbian learning

Hebbian learning describes a type of activity-dependent modification of the strength of synaptic transmission at pre-existing synapses which plays a central role in the capacity of the brain to convert transient experiences into memory. According to Hebb et al 1, two cells or systems of cells th...
Article

Machine learning

Machine learning is a specific practical application of computer science and mathematics that allows computers to extrapolate information based on observed patterns without explicit programming. A defining characteristic of machine learning programs is the improved performance at tasks such as c...
Article

Radiomics

Radiomics (as applied to radiology) is a field of medical study that aims to extract a large number of quantitative features from medical images using data characterization algorithms. The data is assessed for improved decision support. It has the potential to uncover disease characteristics tha...
Article

Convolution

Convolution is a mathematical operation that combines two functions to produce a third. In practical terms for radiology, convolution means applying a mathematical operation to a signal such that a different signal is produced. Convolutions are applied in image processing for CTs and MRIs. ...
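A one-dimensional discrete convolution illustrates the idea that one signal is transformed into a different signal. In this sketch a single spike is convolved with a 3-point averaging kernel (the values are illustrative), spreading the spike across its neighbours, which is the basic mechanism behind smoothing kernels in CT reconstruction:

```python
import numpy as np

# A signal convolved with a 3-point averaging kernel yields a smoothed signal.
signal = np.array([0.0, 0.0, 9.0, 0.0, 0.0])
kernel = np.array([1 / 3, 1 / 3, 1 / 3])

# mode="same" keeps the output the same length as the input signal.
smoothed = np.convolve(signal, kernel, mode="same")
# The single spike of 9 is spread across three samples: [0, 3, 3, 3, 0]
```

Sharpening kernels work the same way with different kernel values, and the 2D analogue of this operation is what convolutional neural networks apply repeatedly to images.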
Article

Bayes' factor

A Bayes factor is a number that quantifies the relative likelihood of two models or hypotheses, expressed as a ratio; e.g. if two models are equally likely based on the prior evidence (or there is no prior evidence), then the Bayes factor would be one. Such factors have several use...
Article

Selection bias

Selection bias is a type of bias created when the sampled data are not representative of the population or group that a study or model aims to make predictions about. Selection bias is the result of systematic errors in data selection and collection. Practically speaking, selection bi...
Article

Automation bias

Automation bias is a form of cognitive bias that occurs when humans overvalue information produced by an automated, usually computerized, system. Users of automated systems can fail to recognize, or may simply ignore, illogical or incorrect information produced by computer systems. Computer programs may crea...
Article

Information leakage

Information leakage is a common and important error in data handling in machine learning applications, including those in radiology. Briefly, it refers to the incomplete separation of the training, validation, and testing datasets, which can significantly change the apparent perfor...
Article

Dice similarity coefficient

The Dice similarity coefficient, also known as the Sørensen–Dice index or simply Dice coefficient, is a statistical tool which measures the similarity between two sets of data. This index has become arguably the most broadly used tool in the validation of image segmentation algorithms created wi...
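The coefficient is defined as twice the size of the intersection of the two sets divided by the sum of their sizes, giving 0 for no overlap and 1 for identical sets. A minimal sketch on boolean segmentation masks (the function name and toy masks are illustrative):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient: 2 * |A ∩ B| / (|A| + |B|),
    computed on boolean segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # convention: two empty masks are considered identical
    return 2.0 * intersection / total

# Two overlapping toy segmentations of the same 3x3 image
pred  = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
truth = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
score = dice_coefficient(pred, truth)  # 2 * 2 / (3 + 2) = 0.8
```

In segmentation validation, `pred` would be an algorithm's output mask and `truth` a reference standard such as an expert's manual segmentation.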
