DXOMARK’s Eye Comfort Label: A guide to better screen time

On April 25th, DXOMARK introduced a new display protocol. Our experts have streamlined the testing process from six to four essential attributes: readability, video, color, and touch.

This update also includes some exciting new features:

  • new HDR video quality targets;
  • updated color targets to reflect geographic color mode preferences;
  • expanded test conditions to include use cases with brighter lighting.

In response to growing user concerns about the impact of screen time on health, particularly sleep, DXOMARK has been working diligently to provide a solution.

Discover the Eye Comfort Label, a major development from our team. This label is designed to evaluate the key factors that affect our visual experience and comfort, such as blue light filtering and flicker perception.

Let’s take a closer look at the details of this new label.

Growing awareness of vision health

The smartphone industry has shown widespread interest in addressing issues related to screen time, up-close reading, and maintaining optimal eye comfort. In France, for example, the average daily screen time in 2023 was 4 hours and 37 minutes, reaching up to 7 hours for teenagers.1

One major concern is blue light exposure, which has been shown to affect sleep quality, overall well-being, and even athletic performance.2 This is in part because blue light (within a specific wavelength range) inhibits the production of melatonin, the hormone responsible for sleep.

In addition to blue light, other factors like flicker and luminance levels can potentially affect vision and overall well-being.

Screens are not harmless: in relatively rare but severe cases, they can even cause epileptic seizures. Flashing lights or certain visual patterns can trigger seizures in 3% of people with epilepsy,3 a condition known as photosensitive epilepsy.

The issue of eye comfort has gained significant attention in recent years, attracting both media and public interest. According to Meltwater, mentions of the topic in media and social media increased by 26.9% from 2022 to 2023.

China is grappling with a major public health issue related to smartphone addiction, with particularly high rates among college students, teenagers, and young adults.4

What do people think?
A DXOMARK survey of 1,737 participants on Weibo, WeChat, Instagram, and X (Twitter) in February 2024 revealed that:

  • 83% of the respondents are concerned about the impact of daily smartphone use on their sleep and vision;
  • 43% consider eye comfort when buying a new smartphone;
  • 71% already use eye comfort mode or night mode.

 

Solutions from the smartphone industry

Eye comfort and care have become a selling point in the smartphone industry. For example:

  • Apple’s iOS 17 includes vision health features;
  • Huawei’s eye comfort mode is designed to reduce eye strain by minimizing blue light;
  • Honor promotes eye comfort with its latest flagship;
  • Xiaomi has also recently published a white paper on eye care.

Also, most devices now include a night mode or blue light filter feature.

DXOMARK’s four major criteria

The Eye Comfort Label serves as a practical guide for users who want to enjoy their smartphones without neglecting potential impacts on visual comfort. The DXOMARK label is structured around four criteria recognized by the scientific community. To receive this label, a device must pass four tests by meeting the thresholds set by our engineering teams.

1. Temporal light modulation

Temporal light modulation is a technique used by manufacturers to manage the luminance output of their screens. This modulation creates unwanted visual effects known as temporal light artifacts, such as flicker and stroboscopic effects.

The temporal light modulation criterion assesses flicker perception, which is the change in luminance perceived by the human eye, often characterized by quick oscillations of light output between on and off.

The Flicker Perception Metric is a recommended metric for assessing the direct perception of light source flicker.5 The metric quantifies the amplitude and frequency of the modulation and compares it to human detection thresholds. A metric below 1 indicates that less than 50% of people will perceive the flicker. The DXOMARK experts have adopted this value as a basis for evaluation. If the Flicker Perception Metric is <1 (in anti-flicker mode or default mode), the device passes the evaluation.
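
To make the pass/fail logic concrete, here is a minimal Python sketch of how such a perception metric can be computed from a luminance waveform captured with a flickermeter. This is not DXOMARK’s or ASSIST’s exact implementation; in particular, the detection_threshold curve below is a placeholder assumption standing in for empirically measured human thresholds.

```python
import numpy as np

def detection_threshold(freq_hz):
    """Assumed human flicker-detection threshold vs. frequency.
    Placeholder curve (sensitivity drops steeply with frequency,
    clipped to avoid overflow); a real metric uses empirically
    measured threshold data."""
    return 0.01 * np.exp(np.minimum(freq_hz, 200.0) / 15.0)

def flicker_perception_metric(luminance, fs):
    """Compare each modulation harmonic of a measured luminance
    waveform against detection thresholds; a result below 1 means
    fewer than 50% of observers are expected to see the flicker."""
    spectrum = np.abs(np.fft.rfft(luminance))
    freqs = np.fft.rfftfreq(len(luminance), d=1.0 / fs)
    # Modulation contrast of each harmonic relative to mean luminance
    contrast = 2.0 * spectrum[1:] / spectrum[0]
    ratios = contrast / detection_threshold(freqs[1:])
    return np.sqrt(np.sum(ratios ** 2))

# Example: 240 Hz PWM dimming with a 20% modulation depth
fs = 20000  # flickermeter sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
lum = 100.0 * (1.0 + 0.2 * np.sign(np.sin(2 * np.pi * 240.0 * t)))
print(flicker_perception_metric(lum, fs))  # well below 1 -> passes
```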

What is flicker?

Flicker relates to temporal light modulation at frequencies below 90 Hz and can be perceived depending on the frequency and the individual’s sensitivity.

Physiological responses to flicker include:

  • dilation and constriction of the iris in response to changes in brightness;
  • an involuntary reaction that can lead to headaches and eye fatigue, especially after prolonged exposure to flickering displays.

The effects of flicker are more noticeable in low-light conditions, such as reading in bed with the lights off. In such settings, the strain on the eyes is increased, contributing to increased discomfort and fatigue.

 

2. Brightness level

Most smartphones have an automatic brightness feature that adjusts to your environment. Also known as auto-brightness or adaptive brightness, this feature is designed to enhance the user’s viewing experience by dynamically adjusting the screen brightness based on the ambient light level.

The purpose of this evaluation criterion is to ensure that the automatic brightness mode prevents screen glare in low-light conditions while still providing sufficient brightness for visibility.

Based on extensive experience, user feedback, and a database of thousands of device tests, our experts have determined that in dark conditions the auto-brightness feature should be able to dim the screen to a luminance level of 2 nits to ensure a comfortable experience for the most sensitive users.

3. Blue light filtering

Of all the different types of light, blue light has the most significant effect on melatonin inhibition. This concept is explored further in a scientific paper published in 2015, “Analysis of circadian properties and healthy levels of blue light from smartphones at night.”6

Blue light filtering is an important consideration when evaluating the impact of smartphones on sleep hormones at night. One way to measure this is to use the circadian action factor (CAF).

This ratio evaluates the effect of different types of light on sleep cycles by looking at how they inhibit melatonin, the hormone associated with sleep.

The CAF, as proposed in the paper, is a metric that measures the efficiency of filtering blue light without negatively affecting visual efficiency. It is calculated as the ratio of circadian efficiency to visual efficiency.

At DXOMARK, we use a maximum CAF of 0.65.

A CAF of 0.65 is comparable to that of a regular white LED light source, the kind commonly used in homes.
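
To illustrate how such a ratio can be computed in practice, here is a rough Python sketch that evaluates a CAF from a display’s measured spectral power distribution (SPD). The Gaussian sensitivity curves and the example spectrum are simplifying assumptions for illustration only; a real evaluation would use tabulated data such as the Gall-Bieske circadian sensitivity function and the CIE photopic luminosity function.

```python
import numpy as np

wavelengths = np.arange(380.0, 781.0, 1.0)  # visible range, nm

def gaussian(lam, peak, sigma):
    return np.exp(-0.5 * ((lam - peak) / sigma) ** 2)

# Rough Gaussian stand-ins for the two sensitivity functions:
# circadian sensitivity C(lambda), peaking near 450 nm (blue), and
# photopic visual sensitivity V(lambda), peaking at 555 nm.
C = gaussian(wavelengths, 450.0, 40.0)
V = gaussian(wavelengths, 555.0, 45.0)

def circadian_action_factor(spd):
    """CAF = circadian-weighted power / visually-weighted power."""
    circadian = np.trapz(spd * C, wavelengths)
    visual = np.trapz(spd * V, wavelengths)
    return circadian / visual

# Example: crude white-LED-like SPD (blue pump + broad phosphor)
spd = gaussian(wavelengths, 450.0, 12.0) + 1.6 * gaussian(wavelengths, 560.0, 60.0)
print(f"CAF = {circadian_action_factor(spd):.2f}")  # label criterion: <= 0.65
```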

Sensitivity functions of the human eye

The human eye has specific sensitivity functions that affect both vision and sleep. In the graph below,7 the blue line shows the peak of circadian sensitivity, which occurs at a light wavelength of around 450 nanometers (which corresponds to blue). The yellow line represents visual sensitivity. It’s worth mentioning that a low circadian action factor indicates a minimal impact on sleep.

Circadian Sensitivity

 

4. Color consistency

When looking at the impact of a blue filter mode on color performance, it is important to evaluate how the color display is affected. With a blue filter, the white point of the display becomes slightly more orange, and there is a reduction and shift in the color domain covered.

Example of a device’s color rendering with BLF off
The same device’s color rendering with BLF on

Ensuring consistent color nuances is a challenge, and this is exactly what we test by evaluating color consistency. We use the Display-P3 color gamut, which is widely used by manufacturers.

Typically, most smartphones today cover 100% of the Display-P3 color space without a blue filter mode. We believe that a blue light mode is beneficial as long as it does not degrade the overall user experience. At DXOMARK, we set a threshold of 95% or more coverage of the Display-P3. This minimum coverage of the Display-P3 ensures a certain level of comfort for smartphone users.
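
For readers curious about how a coverage figure can be computed, here is a small Python sketch using the shapely geometry library. It treats the device’s measured primaries and the Display-P3 primaries as triangles in CIE 1931 xy chromaticity space and reports the covered fraction; the “filter on” primaries in the example are hypothetical, and note that coverage can also be computed in other chromaticity spaces such as CIE 1976 u′v′, so treat this only as a sketch of the idea.

```python
from shapely.geometry import Polygon

# Display-P3 primaries in CIE 1931 xy chromaticity coordinates
P3 = Polygon([(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)])

def p3_coverage(red, green, blue):
    """Fraction of the Display-P3 gamut covered by a display whose
    measured red/green/blue primaries are given as (x, y) pairs."""
    device = Polygon([red, green, blue])
    return device.intersection(P3).area / P3.area

# Hypothetical primaries measured with the blue-light filter enabled
coverage = p3_coverage((0.676, 0.323), (0.272, 0.682), (0.154, 0.066))
print(f"P3 coverage: {coverage:.1%}")  # label criterion: >= 95%
```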

After all, what is the point of implementing a blue filter or night mode if consumers don’t use it because it negatively impacts their experience?

Where to find the label

Each tested product has a high-level product review on dxomark.com. Products that pass will display the label on the page.

On each test results page, you will find detailed information about each of the device’s results, including a breakdown of the eye comfort criteria.


“This label is designed for users of all technical abilities. It helps consumers to choose the right device and incorporates key factors from industry and research. Our ratings are open to everyone and are continually updated with new research.”

Thibault Cabana,
Display Quality Evaluation Director at DXOMARK

What’s next in eye comfort?

DXOMARK experts are working to establish a metric for evaluating the Phantom Array Effect, a recognized temporal light artifact for which no standard currently exists.

The Phantom Array Effect is the perception of seeing repeated images of the light source when making a rapid eye movement (saccade) across a modulated source. Our teams are working with leading academics such as Professor Emeritus Arnold Wilkins, a renowned authority on vision research, to study and define a precise threshold for this phenomenon.

“Often overlooked, temporal light artifacts play a significant role in visual comfort and warrant thorough evaluation. DXOMARK is pioneering the introduction of innovative metrics to the smartphone industry.”

Arnold Wilkins,
Professor Emeritus at the University of Essex

We are entering a new era in vision care that promises to make screens more comfortable to use and easier on the eyes.

1 Source: franceinfo, Screen time in France
2 Source: NCBI, The effects of blue light
4 Source: ScienceDirect, Smartphone addiction around the world
5 Source: Alliance for Solid-State Illumination Systems and Technologies (ASSIST), Flicker Metric
6 Source: NCBI
7 J. H. Oh, H. Yoo, H. K. Park, and Y. R. Do, “Analysis of circadian properties and healthy levels of blue light from smartphones at night,” Scientific Reports, vol. 5, no. 1, Jun. 18, 2015. doi: 10.1038/srep11325.

DXOMARK Decodes: Understanding HDR imaging

Taking photos and videos in a High Dynamic Range format is now a feature of many of the latest flagship smartphones.  HDR video formats have been standardized to some extent as HDR10, HDR10+, Dolby Vision, and HDR Vivid. That’s not the case with HDR photos. One trend that seems to be emerging from the latest flagship releases is that smartphone makers are creating their own HDR ecosystems with proprietary HDR photo formats. In order to reap the visual benefits of HDR, still images not only need to be taken with a brand’s device but also viewed on the same brand’s display. This means that one brand’s HDR photo will not look the same on another brand’s device, even if it, too, is an HDR display. These compatibility issues can be quite confusing for general consumers, who might expect HDR pictures from their smartphone cameras to look the same across different brands’ devices.

In this Decodes article, we will venture into the realm of High Dynamic Range (HDR) images, a domain that encompasses both the capture and display of a wide range of brightness and colors. Although HDR technology has traditionally focused on capturing images, recent innovations primarily concern how these images are displayed. HDR displays and formats have long been established in the cinema and video industry, yet their equivalent has not fully materialized in the domain of still photography. This article will predominantly explore the HDR format within the context of still images. We will discuss newly established standards, delve into the mechanics of gain maps, and provide practical examples to help you understand this technology. Whether you’re a photography enthusiast, tech enthusiast, or simply curious about HDR, this article will give you a solid understanding of these developments with valuable insights into this ever-evolving field.

What is HDR?

In photography, Dynamic Range (DR) has different definitions depending on what we are talking about. The dynamic range of the scene is the ratio of the highest luminance to the lowest luminance in the field of view of the scene. Dynamic range is usually expressed in stops of light and can vary drastically!

“Stops” are a way of expressing ratios of luminance in the world of photography. By convention, stops are expressed on a base-2 logarithmic scale. For instance:

  • 1 stop doubles (or halves) the amount of light
  • 10 stops between the darkest and brightest areas mean a range of 1 to 2^10 = 1,024 (see the short example below)
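
A minimal Python illustration of the conversion between a luminance ratio and stops:

```python
import math

def stops(l_max, l_min):
    """Dynamic range in stops: each stop doubles the luminance."""
    return math.log2(l_max / l_min)

print(stops(1024, 1))    # 10.0 stops
print(stops(5000, 0.5))  # ~13.3 stops, like the portrait scene below
```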

Human vision is remarkably adaptive, allowing us to perceive an extensive range of light levels, from the soft glow of moonlight to the intense brilliance of a sunny day. Our eyes effortlessly adjust to these diverse lighting situations, ensuring that we can discern details in both the darkest and brightest areas of our surroundings. There is no strict definition of a high dynamic range scene, but as a general guide, a scene with more than 10 stops of dynamic range can be considered High Dynamic Range (HDR). Of course, this is only a lower bound, and HDR scenes with more than 20 stops of dynamic range are possible.

Only a part of this dynamic range is captured by the camera. The ability of a camera to capture the dynamic range of the scene depends on many factors: the sensor (which has its own definition of dynamic range), the lens (which may reduce the dynamic range by introducing flare), the acquisition strategies (multiframe exposure bracketing), and the processing of one or multiple captures to reconstruct a representation of the scene. This can involve many complex steps like global and or local corrections, merging frames, denoising images, etc.

Visualizing light dynamics: typical contrast ranges of different technologies. The visual system of humans boasts a remarkable ability to adapt to a broad spectrum of lighting conditions, covering a vast dynamic range. High Dynamic Range (HDR) technologies for image capture have striven to recreate the full range of light that our eyes can perceive.

The fact that a scene has a dynamic range of 12 stops does not mean that all 12 captured stops will be displayed on a device! In order to view or share a photo, it needs to be rendered, and the rendering process further limits the captured image’s dynamic range. For example, a photo rendered for printing could reduce a 12-stop capture to only 6 stops. Computer monitors can do better: a standard LCD display can show up to 8 stops of DR, but for many years the number of luminance levels on a standard monitor was limited to 256 (8 bits). Consequently, this restriction also influenced image storage formats, such as JPEG, confining them to the same 8-bit range.

As we can see, while we can capture high dynamic range content, the challenge is to fit that expansive range into the constraints of the medium. It requires condensing the vast range of light levels into a narrower one while preserving the perception of contrast. This is called HDR tone mapping.
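
Tone mapping operators range from simple global curves to sophisticated local methods. As a minimal illustration of the idea (a sketch, not the processing used by any particular smartphone), here is a classic global operator in the spirit of Reinhard’s, which compresses an unbounded scene luminance range into a display’s [0, 1] range:

```python
import numpy as np

def reinhard_tonemap(luminance, white):
    """Global Reinhard-style operator: 'white' is the smallest scene
    luminance mapped to display white; mid-tones keep most of their
    contrast while highlights are progressively compressed."""
    l = np.asarray(luminance, dtype=float)
    return np.clip(l * (1.0 + l / white ** 2) / (1.0 + l), 0.0, 1.0)

# Scene luminances normalized so that middle gray sits near 0.18
scene = np.array([0.005, 0.18, 1.0, 8.0, 50.0])
print(reinhard_tonemap(scene, white=scene.max()))
```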

HDR scene, HDR capture, HDR tone mapping: an example

Smartphone cameras capture HDR images using techniques like exposure bracketing, in which the same scene is captured at different exposures, and multi-frame merging, in which several captured frames are combined into one image. Many image algorithms are involved, implemented with the help of hardware accelerators called ISPs (Image Signal Processors). During this process, images are stored at a very high bit depth (up to 18 bits on the latest ISPs!).
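
As a simplified illustration of the merging step (a naive sketch, not any vendor’s actual pipeline), the following Python code fuses two linear exposures into a single radiance estimate, down-weighting pixels that are clipped or poorly exposed:

```python
import numpy as np

def merge_brackets(frames, exposure_times):
    """Naive linear HDR merge: divide each frame by its exposure time
    to estimate scene radiance, then average with weights that favor
    well-exposed pixels (a 'hat' function on the [0, 1] pixel range)."""
    acc = np.zeros_like(frames[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposure_times):
        w = 1.0 - 2.0 * np.abs(img - 0.5)  # 1 at mid-gray, 0 when clipped
        acc += w * img / t
        wsum += w
    return acc / np.maximum(wsum, 1e-6)

# Two hypothetical linear captures of the same three pixels
t_short, t_long = 0.01, 0.08
radiance = np.array([0.5, 5.0, 60.0])         # true scene values
short = np.clip(radiance * t_short, 0.0, 1.0)
long_ = np.clip(radiance * t_long, 0.0, 1.0)  # brightest pixel clips
print(merge_brackets([short, long_], [t_short, t_long]))  # ~[0.5, 5, 60]
```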

With a standard dynamic range (SDR) storage format and display, the vast amount of information collected is compressed into 8-bit code values, which cannot fully represent the complete dynamic range. Consequently, this compression may result in a loss of contrast and detail in the final image. When viewed on an SDR display, only a portion of the scene can be faithfully reproduced, further limiting the ability to showcase the full richness of the moment.

Consider, for instance, an indoor portrait captured near a well-lit window on a sunny day. This everyday scenario is precisely what DXOMARK’s HDR portrait setup is designed to replicate. In such a scene, some regions span a wide spectrum of real-world light levels.

Schematic example of scene dynamic range in terms of nits levels. This scene can be considered HDR since it spans log2(5000/0.5) ≈ 13.3 stops of dynamic range!

 

Smartphone cameras often employ the concept of multi-exposure bracketing to capture a wide range of visual information. When this image is encoded in 8 bits and displayed on SDR monitors, it results in a reduction of dynamic range compared to the original scene’s dynamic range.

 

HDR displays, HDR Format, HDR Ecosystem

With the introduction of technologies such as OLED, or local dimming for LCDs, the screen dynamic range has increased. The brightest pixel can go very bright, while the darkest one (if no reflection occurs on your screen) can go very dark. Display luminance is expressed in candelas per square meter (cd/m²), sometimes referred to by its non-SI name, nits. In the past, a typical display could achieve a maximum luminance of about 200 cd/m². Today, some advanced monitors can sustain 1,000 cd/m² across the entire screen, with peak luminance reaching up to 2,000 cd/m². This is a significant increase in brightness compared to traditional displays.

These displays are designated as “HDR” because their high peak luminance and ability to preserve deep blacks allow them to deliver a much wider range of luminance than ever before. The range of colors that can be displayed has also improved. This large dynamic and color range cannot be exploited successfully with only 8 bits of input data. To avoid artifacts such as banding and quantization, manufacturers require input data to be stored on 10 bits.

Computer display manufacturers have agreed, through the Video Electronics Standards Association (VESA), to define a set of HDR performance levels. Within the DisplayHDR-500 category, for example, an “HDR” display must fulfill constraints such as 500+ cd/m² of peak luminance, at least 11.6 stops of contrast on a white/dark checkerboard, 10-bit inputs, and at least 8 bits of internal processing with frame-rate control to simulate the last two bits (technically 8+2 FRC).

The 10-bit input is one of the reasons why, to fully benefit from the performance of the display, new image formats, known as HDR photo formats, need to be defined. These file formats contain 10 bits of data, along with side data (metadata) that helps the playback system correctly interpret the content for the characteristics of the screen.

HDR video can be delivered using different Electro-Optical Transfer Functions (EOTFs), color primaries, and metadata types. These EOTFs, color primaries, and metadata are standardized and published as recommendations by organizations such as SMPTE and the ITU. For example, the Perceptual Quantizer (PQ) EOTF is standardized by SMPTE in ST-2084 as well as by the ITU in Rec. 2100, while HLG as a transfer function is standardized by the ITU in Rec. 2100. The color primaries and viewing conditions are also standardized by the ITU in Rec. 2100 and Rec. 2020.
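
Since the PQ curve is fully specified in ST-2084, it can be written down exactly. Here is a direct Python implementation of the decoding direction, from a non-linear code value in [0, 1] to absolute display luminance:

```python
import numpy as np

# SMPTE ST-2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(code_value):
    """Perceptual Quantizer EOTF: maps a non-linear code value in
    [0, 1] to absolute luminance in cd/m^2 (0 to 10,000)."""
    e = np.asarray(code_value, dtype=float) ** (1.0 / M2)
    return 10000.0 * (np.maximum(e - C1, 0.0) / (C2 - C3 * e)) ** (1.0 / M1)

# A code value of ~0.58 corresponds to the 203 cd/m^2 reference white
print(pq_eotf([0.0, 0.58, 0.75, 1.0]))
```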

Using these recommendations, video content is encoded and delivered in different HDR formats. HDR10+, Dolby Vision, HDR Vivid, etc., are all HDR formats that use dynamic metadata (SMPTE ST 2094), applying metadata on a scene-by-scene basis in an effort to preserve the artistic intent to the greatest extent possible. HDR10 uses static metadata (SMPTE ST 2086), so the same metadata applies to every frame of the video. HLG (based on the HLG EOTF) is another HDR format that has no metadata requirements but is backward compatible with SDR content.

So the display is only one piece of the puzzle. To fully enjoy a scene’s high dynamic range, you need a whole HDR ecosystem: a camera that is capable of capturing and encoding the wide dynamic range of a scene; a photo file format that can store it; a display that can reproduce a wide range of luminance and colors; and a playback system that supports both the HDR format and the HDR display.

On a side note, the viewing environment is also a limiting factor in fully enjoying the HDR experience. In particular, the environment drives the human eye’s adaptation, changing our perception of dark and bright levels. This is the reason why grading studios use strictly standardized lighting conditions when using HDR monitors. Smartphones also offer HDR displays, but their viewing environment is much less controlled. The adaptation of the smartphone display remains to this day one of the main challenges for HDR, as we shall see in a future article.

The HDR Experience

What can we expect from the HDR experience in photography? Potentially brighter images, more pronounced contrast, more realistic colors, but also the capacity to encompass a much larger portion of the scene’s luminance range without sacrificing contrast, which occurs when we are limited to 8-bit storage and displays. In practical terms, this means that with HDR we can illuminate parts of the scene beyond the capability of even the brightest SDR representation. This expanded range in HDR displays is commonly referred to as “headroom” and requires the definition of a “reference white.”

In the world of visual displays, a reference white is like a standard benchmark for brightness, and it serves as the foundation for all other colors. The white background of this web page (provided you are in non-dark mode) is the reference white of the screen.

There’s a common misconception about HDR, often reduced to the idea that it’s all about making screens brighter to replicate the brightest whites more faithfully. While there’s some truth to this notion, it oversimplifies the concept of HDR and what these “brightest whites” truly represent. In reality, HDR goes beyond mere brightness enhancement; it’s about preserving intricate details and contrast, especially in highlights.

Regardless of the specific HDR standard in use, a fundamental goal is to extend the displayed dynamic range. Conventional displays have faced challenges when it comes to rendering details, particularly in highlights. On a standard display, the reference white adheres to specifications at 100 cd/m². It might be logical to assume that on a good HDR display, this reference white would shine at well over 1000 cd/m², given the focus on brightness.

However, this assumption isn’t entirely accurate and underscores a critical aspect of current HDR technology. In HDR, the reference white level remains about as bright as it does on a standard display, and it is set to 203 cd/m² by the current ITU standard [1]. The HDR standard is indeed engineered to handle highlight details far more effectively. Think about your surroundings: Even in bright sunlight, a plain white piece of paper on your desk isn’t as radiant as the sun or the gleaming specular highlights from polished metal surfaces. HDR’s true strength lies in faithfully replicating these variations in brightness and the intricate details they hold.

In essence, the average brightness of elements like human faces and ambient room lighting in HDR, when skillfully graded, doesn’t significantly differ from what we experience in SDR. What sets HDR apart is the significant headroom it provides for those brighter areas of the image, exceeding the conventional 100 cd/m² level. This expanded headroom offers creative freedom during the grading process, bringing out the textures of sunlight-dappled seawater or the intricacies of textured metals like copper with a heightened sense of realism. Furthermore, it enables the portrayal of bright light falling on a human face without compromising skin detail or color, ushering in new creative possibilities for visual storytelling.

This expanded range allows tones and colors more space to express themselves, resulting in brighter highlights, deeper shadows, enhanced tonal distinctions, and more vibrant colors. The outcome is that photos optimized for HDR displays deliver a heightened impact and a greater sense of depth and realism, making the visual experience far more immersive and captivating. However, the appearance of HDR content is susceptible to variations across different devices, owing to the diverse capabilities of HDR displays and the distinct tone mapping methods employed by various software and platforms.

The gain map solution

The wide variation in display capabilities inherently poses challenges for HDR content creators, as it complicates the control or prediction of how their images will be rendered on different devices. To tackle this issue, smartphone industry OEMs have implemented several solutions based on the “gain map” concept. This method offers a practical way to ensure consistent and adaptable HDR image display. It cleverly incorporates both SDR and HDR renditions within a single image file, allowing for dynamic transitions between the two during display.

What do these new image files contain?

Recalling the luminance scale, typical SDR images define black and white as 0.2 and 100 cd/m², respectively. In contrast, HDR images define black and a default reference white as 0.0005 and 203 cd/m², respectively, signifying that everything above 203 cd/m² is considered headroom.

The gain map essentially serves as the quotient of these two renditions. The image file contains an SDR or HDR base rendition, the gain map, and associated metadata. When displayed, the base image is combined with a proportionally scaled version of the gain map. The scaling factor is determined by the image’s metadata and the specific HDR capabilities of the display. To optimize storage efficiency, gain maps can be down-sampled and compressed, ensuring that they seamlessly enhance the viewing experience across various platforms and devices.
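
In simplified form (a sketch loosely following the publicly documented gain map specifications, with illustrative parameter values), applying a gain map looks like this:

```python
import numpy as np

def apply_gain_map(sdr_linear, gain_map_log2, display_headroom,
                   map_min_log2=0.0, map_max_log2=2.0):
    """Scale the gain map by the display's available headroom, then
    boost the linear SDR base rendition. With headroom = 1 (a pure
    SDR screen) the image is untouched; once the headroom reaches the
    map's maximum, the full HDR rendition is recovered."""
    h = np.log2(display_headroom)
    weight = np.clip((h - map_min_log2) / (map_max_log2 - map_min_log2), 0.0, 1.0)
    return sdr_linear * np.exp2(gain_map_log2 * weight)

# A pixel at 60% SDR luminance whose HDR rendition is 4x brighter
sdr = np.array([0.6])
gain = np.array([2.0])  # log2(HDR / SDR) = 2, i.e., a 4x boost
print(apply_gain_map(sdr, gain, display_headroom=1.0))  # [0.6]
print(apply_gain_map(sdr, gain, display_headroom=4.0))  # [2.4]
```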

In 2020, Apple pioneered the use of gain maps with the HEIC image format. iPhone images have since incorporated additional data that enables the reconstruction of an HDR representation from the original SDR image. This approach has now been standardized in the iOS 17 release [2].

Google’s Android 14 release also implements a gain map method, called Ultra HDR [3], while the gain map specification published by Adobe [4] provides a formal description of the gain map approach for storing, displaying, and tone mapping HDR images.

For our readers who do not have access to an HDR display and an image viewer that supports it, let’s try to simulate the effect of the gain map on some images.

Consider this 8-bit image of a person sitting in front of a window:

SDR base rendition. Image taken with Google Pixel 8 Pro

The image displays well when the wall’s white paint is just below the reference white. However, any brighter elements, such as the window, tend to be compressed or rolled off. When examining the gain map image embedded within the 8-bit file, it becomes evident that this file functions as a mask. It identifies the areas of the image that will be enhanced when viewed on an HDR monitor.

Gain map image that is stored with the SDR base rendition

As a result, the image displayed on an HDR monitor will look more realistic and detailed than the image displayed on a traditional monitor. Here is an illustrative animation, designed to be seen on SDR displays, of the image enhancement when transitioning from SDR to HDR display. Beware, this is only a simulation! The transition between an SDR image and an HDR image on an HDR display with a proper setup would be more impressive!

SDR to HDR Comparison
Simulation to illustrate the enhancements in highlights when a gain map is gradually applied. This animation of the transition between SDR and HDR is for illustration purposes on an SDR display and does not fully render the visual experience of a transition between SDR and HDR on an HDR playback system

The advantages of HDR format truly shine when the content captured by the camera spans a broad dynamic range of light levels. Take, for example, this image of a night portrait. The scene has a wide range of light levels, from the bright spotlights in the background to the dark night sky. The model’s face is also lit by artificial lights, creating a high contrast with the surroundings. This high-contrast scene creates a visually stunning effect when viewed on an HDR display. The following animation illustrates the improvements in colors and lightness contrast when transitioning from SDR to HDR visualization.

SDR to HDR Comparison
Simulation to illustrate the enhancements in highlights when a gain map is gradually applied. This animation of the transition between SDR and HDR is for illustration purposes on an SDR display and does not fully render the visual experience of a transition between SDR and HDR on an HDR playback system

Below are some images captured with the Google Pixel 8 Pro, encoded in the Ultra HDR format, and with the Apple iPhone 15 Pro Max.  To fully appreciate these images in HDR, we recommend the following:

  • Use a macOS (Sonoma version is recommended) or Windows system.
  • View in Google Chrome (version 116 or later) or Microsoft Edge (version 117 or later).
  • Utilize an HDR display that supports brightness of 1000 cd/m² or more.

Please note that HDR photos may not display correctly on other browsers and platforms. If you’re viewing from a mobile device, you might need to switch to desktop mode in your browser. Unfortunately, as of our current understanding, no web browser supports Apple Gain Map images. To fully appreciate iPhone HDR pictures, therefore, a recommended approach is to download them and view them in either the iOS Photos app or the macOS Photos app.

For optimal viewing, we recommend the following displays:

  • Recent premium or ultra-premium smartphones.
  • Apple XDR displays, such as those on a MacBook Pro (2021 or later).
  • Any display that is VESA-certified as DisplayHDR 1000 or DisplayHDR 1400.

Enjoy the vivid and lifelike colors that HDR imaging has to offer!

Outdoor – Google Pixel 8 Pro
Outdoor – Apple iPhone 15 Pro Max
Night – Google Pixel 8 Pro
Night – Apple iPhone 15 Pro Max

[1] Report ITU-R BT.2408-2

[2] Applying Apple HDR effect to your photos

[3] Ultra HDR Image Format

[4] Adobe Gain Map

 

What’s new in DXOMARK’s Display protocol?

DXOMARK’s smartphone protocols are known for evolving with the latest innovations in technology and with the trends in how people use their devices. We regularly organize qualitative and quantitative research with consumer panels in order to capture what truly matters to consumers, anticipate their real-life usage, and assess the latest innovations. The results of our studies feed directly into our periodic updates of our testing protocols.

In 2020, DXOMARK launched its Display protocol, which combined lab measurements and real-life tests to thoroughly evaluate the user experience, for example, when browsing the web, watching videos, or viewing photos. But over the years, screen technology has advanced, and users’ habits and expectations have also evolved.

For example, some of the trends we’ve noticed include:

• Devices with higher screen luminance, which could lead to improved readability outdoors
• Various HDR formats that present new challenges to the screen experience
• Color settings adapted to local preferences
• Growing awareness among smartphone users that screen time can affect vision.

So after four years, we are now releasing a major update to our already thorough protocol, Display (Version 2), which takes into account the evolution of the smartphone display experience while maintaining DXOMARK’s high testing standards.

This revamp aims to make the Display protocol easier to grasp and more relevant to readers, with:

• new and more precise measurements
• a reorganized scoring structure that brings the number of subscores to four from six
• adjusted score weightings

We’ll go through all the new elements of the updated protocol in greater detail later in the article.

In conjunction with the release of Display v2, DXOMARK is also introducing the Eye Comfort Label, which will give consumers an instant assessment of the device’s user experience in dim light, based on a collection of metrics from our protocol.

Core changes to the protocol

The overall Display score is now derived from four subscores instead of six. Motion, which is fully linked to video content, has been integrated into the Video subscore, while artifacts have been distributed across the subscores on which they have a direct impact. For example, reflectance and flicker, artifacts that can affect the actual readability of the smartphone, have been integrated into the Readability subscore.

The new weightings will be as follows:

DISPLAY SCORE WEIGHTINGS

 

Readability

The most important consideration for end users is how easily and comfortably they can read the display under real-life conditions. The changes in this subscore were driven by the trends we saw in the new phone releases and the attention paid to improving the user experience. For example, new devices were being optimized for outdoor conditions, pushing the boundaries of peak screen brightness, which is supposed to improve readability in bright conditions.

But a phone’s screen peak luminance is not the only factor when evaluating the device’s performance in bright environments and outdoor conditions; other aspects, such as good tuning and the reflectance ratio, play an important role as well.

In our state-of-the-art laboratories, we can measure and challenge the display’s maximum brightness capabilities by simulating bright outdoor conditions. For this subscore, ambient adaptation testing has been expanded to include two new outdoor conditions at 20,000 lux and 50,000 lux, including diffused light, in addition to the range of lighting conditions we had before: Indoor (250 lux through 830 lux) and Low light (0 lux through 25 lux).

The artifacts flicker and reflectance have also been integrated into this subscore because they affect the readability experience. Flicker, the quick turning on and off of light, is a temporal artifact that is not perceptible to most people, but it can contribute to eye fatigue when it occurs at a low frequency and a high amplitude. Reflectance, the amount of ambient light that bounces off the display, reduces the readability of the viewed image or information.

Color

Color fidelity is the ability of the display to faithfully reproduce the exact same hues and shades from the collected color information. The previous version of our protocol tested color only in the device’s default mode. But through our testing, we saw that a device’s default color performance in relation to the standard white point reference for outdoor conditions (D65) showed a marked difference depending on geographical preferences.

That’s why we have decided to make a major change to our color testing. To measure color accuracy, for example, we have added metrics to evaluate the “natural” or “faithful” color mode, if that mode is not the default. However, some metrics in the Color subscore will still only be tested in default mode, such as on-angle color shifts — the reason being that any angular color shift would be visible regardless of default or faithful color mode.

This subscore also includes tests for the device’s chromatic adaptation to ambient light, meaning how well it preserves the appearance of objects’ colors in relation to the reference white point when the screen is adjusting to changes in lighting. In addition to the indoor and outdoor environments we were already testing, which span 830 lux to 20,000 lux, we’ve now added challenging low-light measurements, spanning 0 lux to 25 lux, to the color attribute.

Video

From a technical perspective, the higher screen luminance in more performant devices should be a plus for the HDR video-watching experience, even though it presents challenges in tone mapping. In addition, the variety of HDR formats and the constraints in being able to view all the benefits of HDR content mean that users are probably not getting a consistent HDR experience.

A dedicated qualitative survey1 that we conducted on the topic revealed that users had different expectations from their screens when viewing HDR content and that those preferences (for example on contrast and brightness) depended heavily on the lighting environments.

Video evaluations were previously limited to a dark room (<5 lux) environment. In the new version of the Display protocol, we have taken into account that people watch video content in various lighting conditions, so we’ve added a new bright indoor (830 lux) evaluation, with lab measurements as well as perceptual analyses.

Devices undergoing perceptual tests against the reference monitor.

Under these lighting conditions, our metrics will cover, among other things, the extent of the color area that the device can render, color accuracy, and luminance. We are enriching the video testing with a new perceived contrast metric derived from the HDR Electro-Optical Transfer Function (EOTF), which measures the conversion of an electronic signal into a certain level of brightness on a display.

Lighting conditions have a strong impact on a display’s brightness and reflection. The device must adapt its tone curve to every changing situation in order to provide a perceptually uniform rendering. We analyzed these strategies, and our findings on HDR Playback perceptions enabled us to define a range for contrast that would correspond to user preferences while keeping the rendering acceptable.

Another change to our Video subscore evaluation is the integration of motion testing. Here, we measure and evaluate aspects of video such as frame drops (when the display fails to show a frame properly before moving on to the next one), which can lead to video artifacts such as stutter (when the same frame is shown twice before the next one is displayed).
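
As an illustration of the principle (a sketch, not DXOMARK’s actual tooling), here is a simple Python routine that flags frame drops and stutter from a list of frame presentation timestamps, such as those extracted from ultra-high-speed camera footage:

```python
import numpy as np

def find_frame_anomalies(timestamps_ms, nominal_fps=60.0, tolerance=0.5):
    """Flag intervals between frame presentation times that deviate
    from the nominal cadence: an interval near 2x the nominal period
    means a frame was dropped or the previous frame was shown twice."""
    period = 1000.0 / nominal_fps
    intervals = np.diff(np.asarray(timestamps_ms, dtype=float))
    anomalies = []
    for i, dt in enumerate(intervals):
        missed = round(dt / period) - 1
        if missed >= 1 or abs(dt - period) > tolerance * period:
            anomalies.append((i, dt, max(missed, 0)))
    return anomalies

# 60 fps playback in which one frame is missing after the 3rd interval
ts = [0.0, 16.7, 33.3, 50.0, 83.3, 100.0]
print(find_frame_anomalies(ts))  # one ~33 ms gap flagged as a drop
```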

Touch

The Touch subscore remains generally unchanged, although evaluations have been refined. For example, response time measurements will reflect average, minimum and maximum performances. Touch accuracy will also look for reactions to unwanted touches.

We have also integrated the specific “jello” artifact as part of a foldable device’s smoothness evaluation.

Introduction to the Eye Comfort Label

It is well-documented that people are spending more time in front of their phone displays. Reports and studies have also linked increased screen time to eye fatigue or trouble sleeping. Consumers in general are growing more aware that high screen usage can have some health repercussions.

Smartphone makers have started to include features on their latest devices that help users monitor their time in front of the screens or allow them to activate blue-light filters or night mode to try to alleviate potentially harmful effects.

Our consumer surveys2 have also shown that users are factoring eye comfort into their buying decisions. The majority of the users we surveyed declared that they were concerned about their vision health, and almost half of our respondents said that they paid attention to mentions of eye comfort. Many smartphone users responded that they always have an eye comfort setting activated or that they make a habit of activating “Night mode” or “eye comfort mode” every evening.

DXOMARK’s detailed measurements in the Display protocol can help determine to what extent the device helps to reduce eye fatigue or the impact on the sleep cycle. We have extracted the key metrics of the protocol that determine whether the display experience is easy on the eyes to create the DXOMARK Eye Comfort Label. Backed by solid measurements and transparency on the specific requirements needed to pass, our Eye Comfort label aims to be relevant to users. Please note that the Eye Comfort Label is not part of the Display protocol and does not factor into the scoring of the device.

Our criteria include:

  • Flicker / Temporal light modulation

Flicker is a phenomenon associated with the temporal modulation of light, the quick oscillation of light output between on and off on a screen. All screens have temporal light modulation to some degree because of the interaction between the screen’s refresh rate (whether 60 Hz, 90 Hz, or 120 Hz) and Pulse Width Modulation (the power that turns the light on and off for a certain duration). Flicker relates to temporal modulation frequencies below 90 Hz.

Flicker is known to create unease, eye fatigue, or, in the most extreme cases, seizures. Additionally, its impact varies considerably among individuals; some people are even able to perceive the modulation directly. The effect of flicker tends to be stronger in dim environments, as the screen and our eyes adapt to the darker light.

That’s why this measurement is important for assessing display comfort.

DXOMARK measures the behavior of a smartphone’s flicker in order to assess flicker perception. For our Eye Comfort label, the detection probability of flicker should be below 50%, in default mode or with the anti-flicker mode activated (if available).

  • Brightness levels

The objective for brightness levels in night mode is that when the screen is suddenly activated in the dark or in low light, it does not shock the eyes and blind the user as the phone’s brightness levels automatically adjust to the environment. To be considered for the label, we require that the screen be able to go down to a brightness level of only 2 nits using the manual adjustment.

  • Blue light filtering

Our studies have shown that consumers are growing more aware of and concerned about the effects that blue light can have on vision and the sleep cycle. Our measurement can help illustrate how the screen performs with these features on or off.

For the Eye Comfort label, we test the night mode performance and the possible impact on the sleep cycle with metrics that are based on scientific research3 on the effects of lighting.

Artificial light can disrupt the circadian rhythm by inhibiting the production of melatonin, the hormone that helps us fall asleep. The circadian action factor, which is the ratio of the light energy that can affect the sleep cycle to the visible light energy (a contributor to vision), is a metric that can help determine how likely the device is to affect the body’s inner clock.

To meet the criteria for our label, the smartphone must have a measured circadian action factor of less than 0.65, which corresponds to the equivalent of a neutral white LED lamp, the kind you might use at home. The luminance from the smartphone display should not be any more disruptive to your circadian rhythm than the light in your home.

  • Color consistency

Color consistency looks specifically at the impact that the blue-light filtering mode has on color performance. It is a given that colors will shift when the blue light filter is activated, but the user experience should still remain as ideal as possible.  A well-tuned device will minimize the effects of the color shift. To meet our criteria for the label, the device must maintain 95% or more of the wide P3 color space coverage when the blue-light filter is activated.

If a product is granted the label, it will appear in the product review; the detailed measurements associated with the label will be available in the product’s test results. Labels will only apply to qualifying products tested after April 25 (the launch of Display v2) and not to the retested devices that have been updated for the new version of the protocol.

New rankings in Display v2

Now that we’ve gone through the key changes in the Display v2 protocol, let’s see how and why some of the devices shifted positions.

Display v2 ranking

 

An overview of some retested devices

To give you a better idea of what to expect from the protocol updates described above, in this section, we give you an overview of the major changes on some specific devices.

Under this new protocol version, we saw many devices gain points in readability and video, benefiting from the additional lighting environments tested. Moving to faithful mode evaluation for color accuracy also impacted some devices.

Let’s take a few examples of popular devices to illustrate the protocol changes:

Honor Magic6 Pro
The Honor Magic6 Pro maintains a great overall display experience, retaining the top score in the new version of our protocol. Despite not having the highest maximum luminance of all the products we have tested, the screen remained consistent and comfortable in all lighting conditions, including the two very bright environments that have been added to the new version of our protocol. In video, the Magic6 Pro performed very well, particularly in low light, delivering good brightness and contrast when viewing HDR10 content. In the new indoor lighting environment (at 830 lux), the device delivered a high peak luminance compared to other typical devices, which for some users could detract from a comfortable viewing experience.

Samsung Galaxy S24 Ultra
The Samsung Galaxy S24 Ultra maintained its position in our display ranking. When tested in the new environments, it delivered a consistent and excellent experience even in the brightest conditions, with a high peak luminance and low reflectance resulting in very good readability across the different lighting conditions. In addition, it showed a very good performance in video watching at 830 lux, but when watching in low light, the screen brightness was much too high to view HDR content comfortably. As we had identified in our previous testing, the S24 Ultra’s display colors were more natural than its predecessors’ colors. In addition, tested in the faithful mode, the Samsung Galaxy S24 Ultra displayed a well-tuned natural rendering compared to default mode, despite showing some slight color shifts when viewed at an angle, more pronounced than those of its competitors.

Apple iPhone 15 Pro Max
The iPhone 15 Pro Max progressed considerably in our new ranking, particularly because of a strong showing in video, color, and touch. In color, testing without True Tone allowed the device to show accurate colors in all tested environments (the same experience for the iPhone 15). In readability, the device displayed a very high peak luminance and average reflectance in the new environments tested (at 50,000 lux), making the screen very readable when outdoors. However, this luminance strongly depended on the contents viewed (and on the associated APL level), meaning it was more appealing to view photos on the device than check a web page. In video, the device showed a very good performance in both environments (at 0 and 830 lux) with well-adapted brightness and contrast while viewing HDR10 content. Details were well managed indoors, while darker details were slightly too bright in dark environments.

Samsung Galaxy A55 5G
Further down in the price segments, the new version of our protocol made the Galaxy A55 5G shine, with the device gaining a few positions in the ranking. As with other Samsung products, the A55 5G delivered a very good video experience, particularly for HDR10 content in our new indoor (830 lux) environment thanks to a proper level of brightness, despite a slight lack of luminance in indoor conditions. Tested in faithful mode, the device also showed good overall color rendering, although it showed some saturation in low-light conditions.

It should be noted that under the previous Display protocol, foldable devices benefited from bonus points linked to the size of the unfolded screen in relation to the size of the folded device. This metric is no longer used in the scoring, resulting in some devices slipping in the ranking. There were some exceptions, though, where the removal of the screen-to-body ratio bonus points didn’t make a difference to the final results. For example, the Honor Magic V2 and the Oppo Find N2 Flip both rose in the ranking thanks to strong gains in the other subscores.

Conclusion

To summarize, this update to Display testing brings some new measurements to adjust to the evolution of both technology and device usage. We hope that these new measurements help make the Display protocol more transparent and easier to grasp.

Check out our Closer Look article for a more detailed look and videos about how we test displays.

At DXOMARK, we will continue to optimize our testing methodologies for new technology developments and changes in user behavior, so stay tuned for future updates.

1 HDR Playback user preferences, a qualitative survey by DXOMARK on a panel of 30 people in January 2024.
2 According to a survey run on social media by DXOMARK with 1,737 respondents in February 2024.
3 Oh, J., Yoo, H., Park, H. et al. Analysis of circadian properties and healthy levels of blue light from smartphones at night. Sci Rep 5, 11325 (2015). https://doi.org/10.1038/srep11325
  Gall, D. & Bieske, K. (2004). Definition and measurement of circadian radiometric quantities. Proceedings of the CIE Symposium ’04 on Light and Health, 129-132.

A closer look at the DXOMARK Display protocol


Earlier, we presented the key points of what we test and score in the DXOMARK Display protocol. In this article, we’ll provide a closer look at our process of testing smartphone displays. We will look at the tools and methods that we use to scientifically evaluate display quality attributes, which are based on specific use cases that reflect the ways in which people use their phones: web browsing, night reading, in-car navigation, taking photos, viewing photos, gaming, and watching movies. We also evaluate how smoothly and efficiently a display’s auto-brightness function responds to changing light conditions.

Before we head into the main topic, it’s important to remember that smartphone display performance isn’t just about display panel quality. Smartphones embed software with dedicated algorithms to control many display functions, and manufacturers choose which settings to use with those algorithms (a process known as “tuning”). Of course, some algorithms are more efficient than others, and the way an algorithm is implemented on a smartphone can make a big difference in performance, as in these examples:

  • Software determines how smartphones balance the trade-off between frame rate and battery usage; depending on the apps used, some phones automatically adjust the frame rate to extend a battery charge (and thus autonomy). What this means is that a smartphone with a refresh rate of 120 Hz does not always refresh the screen at 120 Hz (for example).
  • Many smartphones include an ambient light sensor, a photodetector that gauges surrounding lighting conditions; tuning determines how quickly and appropriately the auto-brightness feature responds to the input from the light sensor, in addition to how well the display adapts to the content being viewed.
  • When people watch videos on their phones, motion interpolation algorithms generate frames in between “real” (existing) frames with the aim of making animations or moving actions appear smoother, and again, the battery vs. frame rate trade-off can have an impact here. (We will visit algorithms again in some of our articles about specific display attributes. Read about the pivotal role software tuning plays in display performance here.)

DXOMARK conducts tests under many different (and sometimes changing) light conditions so as to recreate as closely as possible the real-world experiences of smartphone users, rather than simply pitting display performance against “ideal” viewing conditions as defined in standards and norms.

Finally, as we head into our toolbox, a short reminder: first, we test each and every display under the exact same conditions to ensure that our results are fair, scientifically rigorous, and repeatable. Second, apart from certain well-defined exceptions, such as color accuracy, we test devices using their default settings. And third, DXOMARK measurements differ from those of other sites in that we include not only lab-based objective measurements but perceptual measurements as well.

Objective testing tools

The images below show the array of tools our evaluation experts use when testing display light spectrum, color, luminance (brightness), contrast, uniformity, frame drops, and judder:

Testing devices that DXOMARK uses to measure display quality, from left to right, spectroradiometer, video colorimeter, video colorimeter with conoscope, and compact camera.

We use these tools to measure reflectance, gloss, flicker, and lighting conditions:

Other testing devices, from left to right: spectrophotometer, glossmeter, flickermeter, lux meter, and colorimeter.

We use the tools below to measure touch responsiveness, accuracy, and smoothness:

Ultra-high-speed camera and robot for measuring display touch attributes

We conduct many of our objective tests within the DXOMARK Display Bench, which is a special testing chamber that facilitates testing automation and ensures that our engineers test all devices under the exact same conditions. It includes mounts for devices being tested and for testing tools (principally a spectroradiometer and a video colorimeter), computer-controlled LED lighting arrays to imitate all kinds of lighting types and brightness levels, and lux meters.

A device under test and video colorimeter inside the DXOMARK Display Bench
A device under test and spectroradiometer inside the DXOMARK Display Bench

In both photos showing the inside of the DXOMARK Display Bench above, you can see a device under test (DUT) mounted on the left, with the testing instrument on the right mounted on a rail; testing engineers use computer-controlled servo motors to move the instrument to various distances from the DUT. During testing, the Bench is first sealed against any external light sources, and an engineer controls the tests via computer.

 

In addition to the Display Bench, we have developed a fully computer-controlled Dome System, which serves to reproduce more intense outdoor lighting conditions of up to 50,000 lux. The shape of the dome allows the very intense light to be diffused so that it hits the smartphone’s screen from all directions, in the same way that we experience lighting conditions outdoors. But the dome’s ability to reach extreme levels of brightness permits us to really challenge the limits of a device’s screen capabilities.

The Dome System in display testing

In the photo above, the DUT is attached to a rail within a chamber, with the screen facing the testing instrument. A lux meter sensor, which monitors the intensity of the light, is next to the DUT. The testing instrument, the Radiant imaging colorimeter, which is mounted on an external rail on the other side of the dome (not pictured), acquires contrast and brightness measurements through a hole at the top of the dome as the DUT’s screen displays testing patterns for measurement.

 

Every element of the system (the DUT, the motors controlling the light levels, and the instrument) is controlled by a computer.

In the objective part of our testing, we measure color, contrast, luminance, reflectance, EOTF curve, etc. by displaying calibrated video and photo patterns on the device’s screen. The perceptual part of our testing involves playing a set of videos that we have produced and mastered, enabling a repeatable evaluation using real content in SDR and HDR formats.

A small sample of DXOMARK’s charts, patterns, and visuals that are used in display testing.

Perceptual testing tools

One of the most important tools DXOMARK relies on for its perceptual testing is the human eye. Our perceptual tests confirm and complement our objective tests, in that we want to be sure that we can see in real life what the objective measurements are telling us. Further, objective tests measure only what they are strictly aiming to measure. Given the complexity of the software driving the display as well as the complexity of the human visual system, perceptual tests are an essential ingredient in evaluating display quality.

Our Display protocol engineers receive careful and extensive training before conducting any perceptual tests, some of which involve closely evaluating multiple devices (a DUT and two or three comparison devices) against reference images displayed on a professional monitor. The color and brightness values of each carefully chosen image on the pro display have been precisely calibrated and measured. When making comparisons, engineers follow a very strict and scientifically sound protocol that requires conducting the test multiple times using different engineers each time to avoid any bias.

In addition to our most important perceptual tool (the human eye), our display engineers use a specially designed apparatus that holds several smartphones at once, pro-level monitors, and lux meters.

Our engineers perform all perceptual evaluations by looking directly at the device’s display. We take photos only to use as illustrations, but never use them as a basis for any kind of test or evaluation.

Display protocol tests

The tables in each sub-section below cover all of the attributes that the DXOMARK Display protocol currently tests, and include information about the equipment we use, some of the testing conditions, and some of the result parameters and definitions.

Readability

In our reviews, we regularly remind people that the most important consideration for end-users is how easily and comfortably they can read the display under different real-life conditions. DXOMARK uses its Display Bench and its Dome System to recreate ambient light conditions ranging from total darkness to bright daylight (0, 25, 250, 830, 20,000, and 50,000 lux).

Objective measurements done in the lab are always complemented by perceptual evaluations, allowing us to assess the device’s performance in real-life situations.

Below is a sample graph of comparison data showing brightness/contrast measurements for three devices:

Luminance under various lighting conditions
This graph shows the screen luminance in environments that range from total darkness to outdoor conditions. In our labs, the indoor environment (250 lux to 830 lux) simulates the artificial and natural lighting conditions commonly seen in homes (with medium diffusion); the outdoor environment (from 20,000 lux) replicates a situation with highly diffused light.
Contrast under various lighting conditions
This graph shows the screen’s contrast levels in lighting environments that range from total darkness to outdoor conditions. In our labs, the indoor environment (250 lux to 830 lux) simulates the artificial and natural lighting conditions commonly seen in homes (with medium diffusion); the outdoor environment (from 20,000 lux) replicates a situation with highly diffused light.

In the example above, you can see that the measured contrast in daylight conditions does not live up to the claimed contrast values of 1:1,000,000 (or infinite), which are based on measurements taken in dark conditions (< 0.01 lux). Our measurements show what users actually experience: it is hard to read our screens in sunlight.

Another test of display readability measures the homogeneity or uniformity of brightness output, as shown in the illustrative image below:

Uniformity
Brightness uniformity test, left; False-color luminance map measurements, right.
Photos for illustration only

Our readability tests also include looking for things that affect readability and the user experience, such as artifacts, mainly flicker and reflectance.

Flicker is a phenomenon associated with the temporal modulation of light: the quick oscillation of a screen’s light output between on and off. All screens exhibit temporal light modulation to some degree because of the interaction between the screen’s refresh rate (whether 60 Hz, 90 Hz, or 120 Hz) and pulse-width modulation (the power signal that switches the light on and off for a certain duration). Flicker relates to temporal modulation frequencies below 90 Hz.

Flicker is known to create unease, eye fatigue, or, in the most extreme cases, seizures. Its impact varies considerably among individuals; some people can even consciously perceive the modulation. The effect of flicker tends to be stronger in dim environments, as the screen and our eyes adapt to the darker light.

That’s why this measurement is important for assessing display comfort.

DXOMARK measures the behavior of a smartphone’s flicker in order to assess flicker perception. For our Eye Comfort label, the detection probability of flicker should be below 50%, in default mode or with the anti-flicker mode activated (if available).

For example, flicker tests reveal that slow pulse-width modulation (PWM) can have an impact on visual comfort even for devices with a high refresh rate. (In the graph below, the first spike corresponds to the refresh rate, and the highest spike corresponds to the PWM.)

Temporal Light Modulation
This graph represents the frequencies of light variation; the highest peak indicates the dominant modulation. A combination of low frequency and high modulation is likely to induce eye fatigue.
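For readers who want to see how such a graph is derived, the sketch below runs a Fourier transform on a synthetic luminance trace and picks out the dominant modulation peak. The sampling rate and waveform are illustrative assumptions, not DXOMARK’s instrument output:

```python
# A minimal sketch (not DXOMARK's actual tooling): estimating the dominant
# temporal-light-modulation frequency from a flickermeter-style luminance trace.
import numpy as np

fs = 20_000                     # sampling rate in Hz (assumed; must exceed 2x the PWM frequency)
t = np.arange(0, 1.0, 1 / fs)   # one second of samples

# Synthetic luminance trace: a small 120 Hz refresh ripple plus a deeper PWM component.
luminance = (1.0
             + 0.05 * np.sin(2 * np.pi * 120 * t)    # refresh-rate ripple (first spike)
             + 0.40 * np.sin(2 * np.pi * 240 * t))   # stronger PWM modulation (highest spike)

spectrum = np.abs(np.fft.rfft(luminance - luminance.mean()))
freqs = np.fft.rfftfreq(len(luminance), 1 / fs)

peak = freqs[np.argmax(spectrum)]
print(f"Dominant modulation frequency: {peak:.0f} Hz")  # -> 240 Hz (the PWM peak)
```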

Reflectance is another artifact that affects readability. Smartphone screens are reflective by nature, but the degree to which they reflect light affects the user experience, as reflections reduce the contrast of the displayed content.

To give you an example, the reflection from a simple glass sheet is around 4%, while it reaches about 6% for a plastic sheet. Although smartphones’ first surface is made of glass (or plastic for foldables), their total reflection (without coating) is usually around 5% (or higher) due to multiple reflections created by the complex optical stack that is sometimes coated with an anti-reflection layer.

To determine the device’s reflectance, we measure the reflected light intensity as a function of wavelength over the visible spectrum (400 nm to 700 nm). We use a spectrophotometer in SCI (Specular Component Included) mode to perform reflectance level measurements on smartphone displays when turned off.  The SCI mode measures both the diffuse reflection and the specular reflection.

We then calculate the average based on the measurements within the visible color spectrum.
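As an illustration of that calculation, the sketch below averages a spectral reflectance curve over the visible range, weighted by an approximate human sensitivity curve. The flat 4.8% curve and the Gaussian weighting are stand-in assumptions, not real measurement data:

```python
# A minimal sketch: averaging spectral reflectance over the visible range,
# weighted by an approximate photopic sensitivity curve V(lambda).
import numpy as np

wavelengths = np.arange(400, 701, 10)            # nm, in 10 nm steps as in the graph
reflectance = np.full(wavelengths.shape, 0.048)  # hypothetical flat 4.8% measurement

# Gaussian approximation of V(lambda) peaking at 555 nm -- illustrative only,
# not the tabulated CIE curve.
v_lambda = np.exp(-0.5 * ((wavelengths - 555) / 45.0) ** 2)

avg_reflectance = np.sum(reflectance * v_lambda) / np.sum(v_lambda)
print(f"Average reflectance (SCI): {avg_reflectance:.1%}")  # -> 4.8%
```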

Below are measurements showing the reflectance level at each 10 nm step within the visible spectrum range (400 nm to 700 nm).

Average reflectance (SCI): 4.8% for the Apple iPhone 15 Pro Max, compared against the Google Pixel 8 Pro and Samsung Galaxy S24 Ultra (the lower, the better).
SCI stands for Specular Component Included, a mode that measures both the diffuse reflection and the specular reflection. Average reflectance is computed from the spectral reflectance across the visible spectrum (see graph below), weighted by human spectral sensitivity.
Reflectance (SCI)
Wavelength (horizontal axis) defines light color, but also our capacity to see it: UV wavelengths are too short for the human eye to see, and infrared wavelengths are too long. White light is composed of all wavelengths between 400 nm and 700 nm, i.e., the range the human eye can see. The measurements above show the reflection of the devices within this visible range.
Readability

Unless specified otherwise, all tests are conducted at light levels ranging from 0 to 50,000 lux and under various illuminants (tungsten, white LED, D65, etc.).

Sub-attribute | Equipment | Remarks
Vs. ambient lighting | Bench + spectroradiometer (brightness, in cd/m2) + video colorimeter (contrast, as :1) at 0, 25, 250, and 830 lux; Dome + video colorimeter (brightness and contrast) at 20,000 and 50,000 lux | Brightness should adapt to viewing conditions; screen content should be readable in any condition and stay as close as possible to the original intent.
Vs. average pixel level | Bench + spectroradiometer (brightness) + video colorimeter (contrast) at 20,000 lux; Dome + spectroradiometer (brightness) + video colorimeter (contrast) at 20,000 and 50,000 lux | Neither brightness nor contrast should change with APL.
Brightness vs. time | Light booth with changing light types and brightness levels, at 0 and 830 lux | We investigate reaction time, smoothness, and transition time.
EOTF* | Bench + spectroradiometer | Tested under various light conditions (0, 830, and 20,000 lux) at 20% APL; the closer to the target gamma value, the better.
Uniformity | Video colorimeter + standard lens | Tested at 0 lux; results are given as a percentage (the higher, the better).
Vs. angle | Video colorimeter + conoscope | Tested at 0 lux; the lower the loss of brightness, the better.
Screen reflectance | Spectrophotometer (+ glossmeter, with the display off) | A reflectance result under 4% is considered good.
Flicker | Flickermeter | Flicker frequency corresponds to the highest peak on the graph; the higher the frequency, the better.

*EOTF stands for Electro-Optical Transfer Function, which converts an electronic signal into a particular level of brightness on a display.
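As a worked illustration of what an EOTF measurement yields, the sketch below estimates a display’s gamma from a few hypothetical luminance readings and compares it with a common SDR target of 2.2 (the actual target value used may depend on the test condition):

```python
# A minimal sketch: estimating a display's gamma from luminance measured at a few
# gray levels. The measured values here are hypothetical.
import numpy as np

signal = np.array([0.25, 0.50, 0.75])        # normalized input gray levels
measured = np.array([0.047, 0.218, 0.531])   # hypothetical measured luminance / peak luminance

# For a pure power-law EOTF, L = V**gamma, so gamma = log(L) / log(V) at each level.
gamma = np.log(measured) / np.log(signal)
print("Per-level gamma:", np.round(gamma, 2))          # ~2.2 at each level
print("Mean gamma: %.2f (common SDR target: 2.2)" % gamma.mean())
```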

All objective tests done in the laboratories are then complemented by a series of perceptual tests.

 

Color

From the end-user’s point of view, color fidelity — that is, having the display faithfully reproduce the exact same hues and shades that they see with their eyes — is second in importance only to readability.

We use a conoscope in the setup below to evaluate how color shifts when users view display content on axis versus when they look at content on a screen held off axis (tilted up to 70°).

Setup of conoscope-equipped video colorimeter (for illustrative purposes only; actual testing takes place at 0 lux).

We perform color fidelity measurements for different lighting conditions to see how well the device can handle color management under different ambient lighting conditions. Below is just one of our color fidelity results taken under a D65 illuminant at 830 lux.

Color fidelity
Each arrow represents the color difference between a target color pattern (base of the arrow) and its actual measurement (tip of the arrow). The longer the arrow, the more visible the color difference is. If the arrow stays within the circle, the color difference will be visible only to trained eyes. The tested color mode is the most faithful proposed by each device, and a color correction is applied to account for the different white points of each device.
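JNCD values are typically computed as distances in a perceptually uniform chromaticity space. Below is a minimal sketch, assuming the common convention of one JNCD equaling a Δu′v′ of 0.004 in the CIE 1976 diagram; DXOMARK’s exact implementation may differ:

```python
# A minimal sketch of a JNCD-style color-difference computation in the CIE 1976
# u'v' chromaticity diagram. The 0.004-per-JNCD threshold is a common convention.
def uv_prime(X: float, Y: float, Z: float) -> tuple[float, float]:
    """Convert CIE XYZ tristimulus values to u'v' chromaticity coordinates."""
    d = X + 15 * Y + 3 * Z
    return 4 * X / d, 9 * Y / d

def jncd(target_xyz, measured_xyz, threshold: float = 0.004) -> float:
    """Euclidean distance in u'v', expressed in JNCD units."""
    u1, v1 = uv_prime(*target_xyz)
    u2, v2 = uv_prime(*measured_xyz)
    return ((u1 - u2) ** 2 + (v1 - v2) ** 2) ** 0.5 / threshold

# Hypothetical example: a slightly bluish rendering of a D65-like white.
print(f"{jncd((95.047, 100.0, 108.883), (93.0, 100.0, 112.0)):.2f} JNCD")
```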

Most smartphone manufacturers include a feature we refer to as a blue light filter (BLF); DXOMARK measures how effectively a phone reduces blue light, and what its impact is on overall display color rendering.

Spectrum of white emission with Night mode ON
Spectrum of white emission with Night mode OFF
Spectrum measurements of a white web page with BLF mode on and off. These graphs show the impact of blue-light filtering on the whole spectrum. All other settings are at their defaults; in particular, the luminance level follows the manufacturer’s auto-brightness adaptation.
The wavelength (horizontal axis) defines light color but also our capacity to see it: UV (very short wavelengths) and infrared (very long wavelengths) are both invisible to the human eye. White light is composed of all wavelengths between 400 nm and 700 nm, the range visible to the human eye.

Color

Unless specified otherwise, all tests are conducted at light levels ranging from 0 to 20,000 lux and under various illuminants (tungsten, white LED, D65, etc.).

Sub-attribute | Equipment | Remarks
White point vs. ambient lighting (scored on faithful) | Bench + spectroradiometer | Result is the color temperature of the device’s white point (in Kelvin).
White point vs. time (scored on default) | Light booth | We investigate whether the white point adapts to changes in ambient brightness, and whether the adaptation is smooth.
Gamut vs. ambient lighting (scored on faithful) | Bench + spectroradiometer | Result is a percentage of color gamut coverage (the higher, the better).
Color fidelity/accuracy (scored on faithful) | Bench + spectroradiometer | Results are the color difference between the target and the measurement, given in JNCD (“just noticeable color difference”).
Vs. angle (scored on default) | Video colorimeter + conoscope | Tested at 0 lux; results are the color difference expressed in JNCD; the less noticeable the color shift, the better.
Uniformity (scored on default) | Video colorimeter + standard lens | Tested at 0 lux; the fewer the color differences across the screen, the better.
Blue light filter impact (scored with night mode / blue-filtering mode on) | Bench + spectroradiometer | Tested at 0 lux; the blue component should decrease without changing the gamut.

 

Video

A device may handle still-image content better than video, or vice versa. DXOMARK tests displays using the device’s default video app. In the images below, used to illustrate video test results, you can see that the device on the left has low brightness but is still visible; the image in the center has good brightness; and the device on the right is quite dark. As for color, the left-hand device shows good color rendering; the middle device has a yellow cast; and the right-hand device is too blue.

Device has low brightness, but has good color
Device has good brightness but is slightly yellow
Device output is too dark and too blue
Photos for illustration only

 

Video

Tested in standardized conditions at 5 cd/m2 and in 0 and 830 lux lighting conditions.

Sub-attribute | Equipment | Remarks
Brightness/Luminance | Bench + spectroradiometer | Device brightness should be visually comfortable in low-light and indoor conditions.
EOTF | Bench + spectroradiometer | Rendering of details in dark tones, midtones, and highlights should be as close as possible to that of the target reference screen, and maintained in indoor conditions.
Color | Bench + spectroradiometer | Color must be as close as possible to the target reference screen and maintained in indoor conditions.
APL | Bench + spectroradiometer | Brightness should not change with APL.
Frame drops | Compact camera | Tested at 0 lux; absolute number of stutter indicators (white) and frame drops (black) between 0 and 100 for a 32-second clip.
Motion blur | Perceptual evaluation | The smoother the image, the better.
Judder | Compact camera | Video content evaluation at 24, 30, and 60 fps.

We run perceptual testing with a Sony reference monitor.

The video part of our testing also evaluates motion and how well a display handles moving content. We evaluate motion blur perceptually by looking at frame duplications. The image below on the left shows the setup we used to take the center and right-hand images, which illustrate what we evaluated perceptually. (We did not base any of our test results on these pictures.)

Setup for taking a photo of motion blur to use as an illustration
Illustration of device output showing duplications
Illustration of device output showing better control of blur
Photos for illustration only

Other motion phenomena we test for are stutter and frame drops. The photo on the left shows our stutter/frame drop testing setup; the GIF on the right illustrates the test video output of a white rectangle that is lit in successive frames.

Stutter and frame drop testing setup
GIF illustrating video test for stutter and frame drops

In the illustrative images below, a black or dark gray rectangle indicates a frame drop (the display fails to show a frame properly before moving directly to the next one), and a white rectangle indicates stutter (the display shows a frame twice before moving on).
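As a rough sketch of the counting logic (our illustration, not DXOMARK’s actual analysis code), imagine extracting from the high-speed footage the source-frame index shown at each display refresh; drops and stutters then fall out of simple comparisons:

```python
# A minimal sketch: counting frame drops and stutter from the sequence of
# source-frame indices a high-speed camera sees the display present.
def count_artifacts(shown: list[int]) -> tuple[int, int]:
    """shown[i] is the source-frame index presented at display refresh i."""
    drops = stutters = 0
    for prev, cur in zip(shown, shown[1:]):
        if cur == prev:
            stutters += 1            # same frame presented twice (stutter)
        elif cur > prev + 1:
            drops += cur - prev - 1  # skipped source frames (drops)
    return drops, stutters

# Hypothetical trace: frame 3 is shown twice (stutter) and frame 6 is never shown (drop).
print(count_artifacts([1, 2, 3, 3, 4, 5, 7, 8]))  # -> (1, 1)
```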

Illustration of a device showing almost no measured stutter or frame drops
Illustration of a device showing a lot of measured stutter and frame drops
Photos  for illustration only

Touch

As shown below, we have a set of high-end measuring instruments for performing touch analyses, including a robot that simulates human gestures (tap, zoom, and scroll) on a touchscreen with a precision of 0.05 mm at 1.5 m/s. In addition, we use a high-speed Phantom camera that records 1440 images per second for slow-motion capture of each frame on a smartphone display.

Touch robot
High-speed camera filming robot testing touch
Average touch response time: 67 ms for the Apple iPhone 15 Pro Max, compared against the Google Pixel 8 Pro and Samsung Galaxy S24 Ultra (the faster, the better).
Touch-to-display response time: this test precisely evaluates the time elapsed between the robot’s single touch on the screen and the displayed action. It applies to activities that require high reactivity, such as gaming.

In the video below, you can see a representative example of the results we obtain with our touch response time setup. In our gaming use case, the device on the left reacts more than three times faster than the device on the right, with response times of 3 ms and 10 ms, respectively.

Robotic touch-testing of two devices (DXOMARK gaming use case)
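To make the arithmetic concrete: at 1440 captured frames per second, each frame spans about 0.69 ms, so latency can be derived from frame indices alone. A minimal sketch with hypothetical frame numbers, not DXOMARK’s tooling:

```python
# A minimal sketch: converting high-speed camera frame indices into
# touch-to-display latency.
FPS = 1440  # capture rate of the high-speed camera

def latency_ms(touch_frame: int, response_frame: int, fps: int = FPS) -> float:
    """Elapsed time between the frame where the robot's tap lands and the frame
    where the screen first shows the resulting action."""
    return (response_frame - touch_frame) / fps * 1000

# Hypothetical readings: the screen reacts 14 captured frames after the tap.
print(f"{latency_ms(100, 114):.1f} ms")  # -> 9.7 ms
```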

In the example below showing smoothness, we observed during testing that the device on the left is not smooth and that the one on the right is very smooth when scrolling horizontally in a smartphone’s gallery app. The illustrations accurately depict the phenomenon: on the left, we see a few irregularly spaced images, while there are many more evenly spaced images on the right.

Illustration of test output indicative of choppy scrolling
Illustration of test output indicative of smooth scrolling
Photos  for illustration only
Touch

Tested in indoor conditions (300–830 lux).

Sub-attribute | Equipment | Remarks
Response time | Touch robot + high-speed camera | Touch response time measured in the gaming use case.
Accuracy | Touch robot + high-speed camera; perceptual testing on a gaming app | Accuracy error and repeatability measured on all areas of the screen (including edges).
Smoothness | Perceptual testing while navigating a photo gallery app and web pages | The smoother, the better.

 

Eye Comfort Label

DXOMARK’s detailed measurements in the Display protocol can help determine whether a device provides a comfortable viewing experience. To create the DXOMARK Eye Comfort Label, we extracted the key metrics of our protocol that we consider most important in determining whether the display experience is easy on the eyes.

Backed by solid measurements and transparency on the requirements necessary to pass, our Eye Comfort label aims to be relevant to users and helpful to manufacturers in improving their products.

Eye Comfort Label

A smartphone that passes all four criteria qualifies for the DXOMARK Eye Comfort label, which will appear on the device’s product review as well as in the detailed test results on dxomark.com.

Let’s go into more detail on each of the four criteria that a device must pass in order to qualify for the label and the DXOMARK measurements behind them.

Temporal Light Artifacts 
We explained earlier how we measure for flicker, which can be a factor in viewing comfort.

Our measurements look at the frequency of the light-output oscillations over time and the degree of modulation, restricted to frequencies under 90 Hz. What interests us is the frequency at which the modulation peaks, as this peak indicates the pulse-width modulation (PWM).

We measure a smartphone’s flicker behavior and then apply our collected measurements to the flicker perception metric, which is currently used as a standard to determine relative sensitivity to flicker.1

For a device to pass this criterion for our Eye Comfort label, the flicker perception metric must measure less than 1 in default mode or anti-flicker mode, meaning that the probability of perception is less than 50%.

Brightness levels 
To pass this criterion, the device first must have an auto-brightness feature. We want to be sure that the device strictly manages the amount of luminance (or brightness) it emits when activated in the dark or in low light, to avoid blinding the user.

The measurements come from our “luminance under various lighting conditions” testing done in the dark (0 lux).

For a device to pass, the screen’s luminance should be able to adjust in default as well as manual modes to 2 nits (or 2 candelas per square meter) or less of luminance.

Blue-light filtering
Our studies have shown that consumers are growing more aware of and concerned about the effects that blue light from phone or computer screens can have on their vision and their sleep cycle. Research shows that artificial light, and in particular exposure to blue light at night, can disrupt the human circadian rhythm by inhibiting the production of melatonin, the hormone that helps us fall asleep. Despite the numerous studies done on what affects the circadian rhythm, there is still no scientific or medical consensus on the levels of blue light that could disrupt the sleep cycle.

For the Eye Comfort label, we extract the protocol’s blue-light filtering measurements done with night mode on and off to determine the possible impact on the human sleep cycle, with metrics based on recent scientific research.2

By measuring a light source’s influences on the circadian rhythm and on vision, we can calculate a device’s Circadian Action Factor to determine its effect on our internal clock.

To meet the criteria for our label, the smartphone screen must have a circadian action factor equal to or less than 0.65 with the default blue-light mode on. The 0.65 level corresponds to the light from a neutral white LED lamp, the kind you might use at home. Our position is that the luminance from the smartphone display should not be any more disruptive to your internal body clock than the light in your home.
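In the lighting literature, a circadian action factor is commonly computed as the ratio of a light source’s circadian-weighted output to its photopic-weighted output. The sketch below illustrates that ratio with Gaussian stand-ins for the real weighting curves; the curves and the sample spectrum are our assumptions, not DXOMARK’s data:

```python
# A minimal sketch of a circadian-action-factor computation. The Gaussian curves
# below are illustrative stand-ins, not the standardized circadian c(lambda) and
# photopic V(lambda) weighting functions used in actual measurements.
import numpy as np

wl = np.arange(400, 701, 5)                          # visible wavelengths, nm
spd = np.exp(-0.5 * ((wl - 450) / 20.0) ** 2) + 0.6  # hypothetical blue-heavy display spectrum

c_lambda = np.exp(-0.5 * ((wl - 460) / 35.0) ** 2)   # circadian sensitivity (peak near 460 nm)
v_lambda = np.exp(-0.5 * ((wl - 555) / 45.0) ** 2)   # photopic sensitivity (peak near 555 nm)

caf = np.sum(spd * c_lambda) / np.sum(spd * v_lambda)
print(f"Circadian action factor: {caf:.2f}")  # label threshold: <= 0.65
```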

Color consistency

Color consistency looks specifically at the impact that the blue-light filtering mode has on color performance. It is a given that colors will shift when the blue light filter is activated, which also shifts the screen’s white point. A well-tuned device will minimize the effects of the color shift.

To meet our criteria for the label, the device must maintain 95% or more of the wide P3 color space gamut when the blue-light filter is activated, after a white point correction toward D65 (Bradford transform).
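Pulling the four published thresholds together, a device’s eligibility reduces to four comparisons. The helper below is a minimal sketch based solely on the criteria stated in this article; the function name and inputs are our own, not DXOMARK’s internal tooling:

```python
# A minimal sketch tying together the four Eye Comfort label criteria described
# in this article. The threshold values come from the text; the function itself
# is illustrative only.
def qualifies_for_eye_comfort_label(
    flicker_perception: float,       # flicker perception metric (default or anti-flicker mode)
    min_luminance_nits: float,       # lowest achievable luminance at 0 lux
    circadian_action_factor: float,  # with the default blue-light mode on
    p3_gamut_coverage: float,        # % of P3 retained with BLF on, after D65 correction
) -> bool:
    return (
        flicker_perception < 1.0             # detection probability below 50%
        and min_luminance_nits <= 2.0        # dims to 2 nits or less in the dark
        and circadian_action_factor <= 0.65  # no worse than a neutral white LED lamp
        and p3_gamut_coverage >= 95.0        # color consistency with the filter on
    )

print(qualifies_for_eye_comfort_label(0.8, 1.5, 0.60, 96.0))  # -> True
```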

Conclusion

In our tests, we perform more than 600 measurements and 35 hours of perceptual analysis. These measurements are conducted in dark environments as well as in challenging indoor and outdoor lighting environments to imitate the end-user experience.

While having the latest high-quality panel on your smartphone is a good start toward reaching good display quality, it’s not enough. As we said earlier, display performance quality does not solely depend on hardware specifications, but also on the software strategy choices that manufacturers make to try to optimize end-user comfort across different use cases.

We hope this article has given you a more detailed idea about some of the scientific equipment and methods we use to test the most important characteristics of your smartphone display.

1 Bodington, D., Bierman, A., & Narendran, N. “A flicker perception metric.” Lighting Research Center, Rensselaer Polytechnic Institute, Troy, NY, USA.
2 Oh, J., Yoo, H., Park, H. et al. “Analysis of circadian properties and healthy levels of blue light from smartphones at night.” Sci Rep 5, 11325 (2015). https://doi.org/10.1038/srep11325

DXOMARK Decodes: An introduction to AI in smartphone cameras https://www.dxomark.com/dxomark-decodes-an-introduction-to-ai-in-smartphone-cameras/ Fri, 08 Mar 2024 11:09:15 +0000

DXOMARK’s Decodes series aims to explain concepts or dispel myths related to technology, particularly in smartphones and other consumer electronics.  In this edition, we address the current buzz around artificial intelligence and briefly look at one way that AI is being used in smartphone cameras. We’ll continue to explore other ways in which AI is used in smartphone cameras and image quality assessment in future articles.


Smartphone photography has always had an element of magic about it. We just point and tap our devices in the hopes of capturing a moment or the scenery, no matter how challenging the situation might be. Smartphone cameras are now very sophisticated in the way they can make almost any image or video come out with correct exposure, good details, and great color, helping to overcome the compact device’s optical limitations.

Recently, we saw the importance that smartphone makers are placing on using artificial intelligence in the latest flagships to improve the user experience, particularly the image-taking experience. We saw some of the latest  AI camera technologies with the release of Samsung’s Galaxy S24 Ultra, for example, which emphasized a range of AI photography tools that can guide the image-taking process from “preview to post,” including editing capabilities that allow users to resize or move objects or subjects after capturing the image. The latest Google Pixel phones also use AI technologies that allow users to reimagine or fix their photos with features like “Best Take” or “Magic Eraser,”  which blend or change elements such as facial expressions, as well as erase unwanted elements from a photo.

But while smartphones put a camera in everybody’s hands, most smartphone users are not photographers, and many devices do not even offer options to adjust certain photographic parameters; in many cases, AI handles them instead. As AI makes its way into many aspects of our lives, let’s briefly explore what AI is and how it is being applied to smartphone cameras.

What do we mean by AI?

AI is a fast-developing field of computer science that solves problems by perceiving, learning, and reasoning, intelligently searching through many possible solutions. AI has given computer systems the ability to make decisions and take action on their own, depending on their environment and the tasks they need to accomplish. With AI, computer systems perform tasks that would normally require some degree of human intelligence, from driving a car to taking pictures. It’s no wonder that companies worldwide are using AI to improve their products, services, and user experience.

We often hear the terms Artificial Intelligence, machine learning and deep learning bandied about interchangeably. But the three terms have some distinctive differences in how they process data.

Artificial Intelligence is a general term describing the ability of a computer or robot to make decisions autonomously. Within AI is a subfield called machine learning, which comprises algorithms that integrate information from empirical data. After coding the algorithm, the programmer executes it on a set of data used for “training.” The algorithm looks for patterns in the data that allow it to make predictions for a given task. When new data comes in, the algorithm searches for the same patterns and makes the same kinds of predictions on that new data. In effect, the algorithm learns to adapt to new data.

A subset of machine learning is called deep learning, which processes larger and more complex data sets in a more sophisticated way, through multi-layered structures called neural networks, to achieve even more precise results and predictions.
Deep learning-based models, for example, are widely used in image segmentation on X-rays for medical applications, in satellite imaging, and in self-driving cars.

Smartphone photography is also benefiting from deep learning models, as cameras are programmed to learn how to produce a perfect image.

How AI is used in smartphone photography

You might not realize it, but even before you press the shutter button on your smartphone to take a photo or video, your personal pocket photographer has already begun working on identifying the scene and in some cases differentiating the objects and setting the parameters to be ready to produce an image that will hopefully be pleasing to you.

Smartphone photography is a good example of AI at work because the images are already a result of computations that rely on certain AI elements such as computer vision and algorithms to capture and process images.

In contrast, a traditional DSLR camera provides a photographer with a wide range of parameters for creative image-taking. The way these parameters are set depends on:

– identifying the scene (portrait, natural scene, food, etc.) to be photographed and its semantic content, meaning what the viewer should focus on in the image;
– the properties of the scene, such as the amount of light, the distance to the subject, etc.

But most smartphone cameras do not even offer the option to adjust these parameters.

Scene detection

The ability of a machine to learn depends on the quality of the data it processes. Using computer-vision algorithms, which are themselves a form of AI, a smartphone camera needs to correctly identify the scene by extracting information and insights from images and videos in order to adapt its processing.

The following examples are simple segmentations, in which the object is separated from the background and categorized.

What allows the computer or device to extract this information is called a neural network. With neural networks, computers can then distinguish and recognize images in the same way that humans do.

There are many different types of neural networks, but the main machine-learning model used for images is the Convolutional Neural Network (CNN), which puts an image through filters, or layers, that activate certain features of the photo. This allows the scene and the objects in it to be identified and classified. CNNs are used for semantic segmentation of an image, in which each pixel is categorized into a class or object.
Semantic segmentation and image labeling, however, are among the most challenging tasks in computer vision.
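To make the idea of per-pixel classification concrete, here is a deliberately tiny sketch of a fully convolutional network in PyTorch. It is illustrative only: the layer sizes and the four hypothetical classes are our assumptions, and production smartphone models are vastly larger and heavily optimized for mobile chipsets.

```python
# A toy fully convolutional network for semantic segmentation, sketched in PyTorch.
# It only illustrates the idea of per-pixel classification described above.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    def __init__(self, num_classes: int = 4):  # e.g., sky, skin, foliage, other
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # A 1x1 convolution maps each pixel's features to class scores.
        self.classifier = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.classifier(self.features(x))  # (batch, classes, H, W)
        return logits.argmax(dim=1)                 # per-pixel class label

model = TinySegmenter()
frame = torch.rand(1, 3, 64, 64)  # a dummy 64x64 RGB camera frame
print(model(frame).shape)         # -> torch.Size([1, 64, 64])
```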

A camera’s ability to learn to “see” scenes and objects as humans do depends on extensive databases of meticulously annotated and labeled images. Image labeling is still a task that requires human input, and many companies create and sell massive databases of labeled photos, which are then used to build machine learning models that can be adapted to a wide range of products and specific applications.

The technology has advanced very quickly, and some chipmakers are already incorporating semantic segmentation into their latest chips so that the camera is aware of and “understands” what it is seeing as it takes the photo or video, in order to optimize it. This is known as real-time semantic segmentation or content-aware image processing. Much of this capability comes from the improved processing power of chipsets, which now integrate AI technologies to optimize photo and video capture. By separating the regions of an image in real time, certain types of objects in the image can be optimized for qualities such as texture and color. We’ll take a closer look at the other ways AI plays a role in image processing in another article.

Now let’s take a look at a real-life example of AI at work in a smartphone camera. The example below reveals how the camera makes decisions and takes action on its own based on what it has identified in the scene. You’ll see how the camera adjusts the image as it goes from identifying the scene (is it a natural scene or a portrait?) to detecting a face, and then adjusts the parameters to provide a correct exposure for the target: the face.

Photo 1
Photo 2
Photo 3

In Photo 1 on the left, the camera identifies a natural landscape scene and exposes for it. In Photo 2, when the subject turns around, the camera still has not fully identified the face. By Photo 3, the camera has identified a face in the scene and has acted to focus on it and expose it properly, at the expense of the background exposure. Besides the changed exposure of both the background and the face between Photo 3 and Photo 1, we also see that the subject’s white T-shirt has lost much of its detail and shading.

While Photo 3 is not ideal in terms of image quality, we can clearly see the camera’s decision-making process to prioritize the portrait for exposure.

Conclusion

As more manufacturers incorporate the “magic” of  AI into their devices, particularly in their camera technology to optimize photos and videos, software tuning becomes more important to get the most out of these AI capabilities.

Through machine learning, smartphone cameras are being trained to identify the scenes more quickly and more accurately in order to adapt the image treatment. Through deep learning and its use of neural networks, particularly the image-specific CNN, smartphone cameras are not only taking photos, but they are also making choices about the parameters once reserved for the photographer.

AI is helping to turn the smartphone camera into the photographer.

We hope this gives you a basic understanding of how AI is already at work in your smartphone camera. We will continue to explore how AI affects other areas of the smartphone experience in future articles. Keep checking  dxomark.com for more Decodes topics.

Smartphone portrait photography and skin-tone rendering study: Results and trends https://www.dxomark.com/smartphone-portraits-skin-tone-rendering/ Wed, 21 Feb 2024 18:15:36 +0000

In the summer of 2023, DXOMARK experts conducted their largest study of smartphone portrait photography, focusing on everyday life moments. The study:
● focused on portraits (all varieties of pictures featuring individuals);
● captured 405 scenes with 83 regular consumers as models;
● consisted of a user panel of these models, 30 professional photographers, and 10 DXOMARK image quality experts.

Our goal was to measure user preferences for people pictures and identify emerging trends in smartphone portrait photography. The study included individuals representing a wide range of skin tones, which led to a compelling question: Does the perceived quality of images remain consistent across different skin tones?

Read on to learn about our main findings.

4 Key takeaways

1. Today’s best smartphones fail to meet user expectations for portrait rendering in pictures.

2. There are significant differences between smartphones in terms of portrait rendering, resulting in varying levels of user satisfaction.

3. The perceived quality of images does not remain consistent across all skin tones pictured.

4. Smartphones still have room for improvement in achieving satisfying photo rendering in every light condition.

 

Three devices, three renderings

As part of their methodology, DXOMARK experts included in their study the rendering of three premium flagship devices released in late 2022 and 2023, along with those of a professional photographer using a Digital Single-lens Reflex (DSLR) camera. Participants were then asked to identify which image they would not want to post on social media, as a criterion to highlight their level of satisfaction.

The goal of this study was to identify trends in users’ rendering preferences.

The first key finding of the survey, which explained the subsequent results, was the noticeable differences in overall rendering between the three devices. This suggests that each manufacturer has its own distinct visual “signature.”

The manufacturer’s technical choices

We observed significant differences between the photos produced by the devices, even in very basic use cases. This resulted in different Satisfaction Indexes. This was true for both typical outdoor and indoor scenes.

 

“Satisfaction levels vary widely between the photos, hence the importance of studying trends in terms of user preferences. It also underscores the challenge manufacturers face in creating a unique style while ensuring they deliver a rendering that appeals to the majority of users.”
Hervé Macudzinski, Image Science Director, DXOMARK

 

Here, we observed notable differences in overall brightness, skin color, color rendering, and face exposure. Even in less technically demanding scenes, the manufacturer’s signature clearly influenced the results. Each device also scored a unique Satisfaction Index, underlining their distinct characteristics.

What is the Satisfaction Index?
The Satisfaction Index, developed by DXOMARK experts, is a metric that quantifies user preferences and measures the level of satisfaction of respondents. It takes into account several factors, including:
● Just Objectionable Difference (JOD);
● Image Rejection (%);
● Mean Rejection (%).
The Satisfaction Index is scored on a scale of 0 to 100, where:
● 0 indicates that the image was rejected by more than 50% of respondents;
● 100 indicates no rejection at all.
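DXOMARK has not published the exact formula behind the Satisfaction Index, but the two anchor points above pin down the scale’s behavior. Purely as a toy illustration (our assumption, not DXOMARK’s method, which also folds in JOD), a linear mapping from rejection rate to the 0–100 scale could look like this:

```python
# A toy illustration only: the real Satisfaction Index formula is not published.
# This linear mapping merely reproduces the two anchor points given above
# (50%+ rejection -> 0, no rejection -> 100).
def toy_satisfaction_index(rejection_pct: float) -> float:
    return max(0.0, 100.0 * (1 - rejection_pct / 50.0))

print(toy_satisfaction_index(0))   # -> 100.0 (no rejection)
print(toy_satisfaction_index(25))  # -> 50.0
print(toy_satisfaction_index(60))  # -> 0.0  (rejected by a majority)
```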

 

The verdict

The best smartphones are failing to meet user expectations for portrait pictures.

A total of 1,620 photos were taken for this study, and each photo was assigned a Satisfaction Index. A score of 70 or more guarantees a high JOD score and a low rejection rate, indicating that the photo is generally satisfactory to panelists. An index below 70/100 indicates that the photo may not meet user expectations.

The overall Satisfaction Index for all the portrait pictures reviewed was 61.

Chart_Satisfaction Index for Smartphones

Interestingly, users had higher expectations for portrait photos in all conditions:

Indoors: of the following indoor shots, respondents preferred the brighter picture taken with Device A (after the photographer’s rendering). Here, the Satisfaction Index was 60.

At night: users expect people to stand out. The Satisfaction Index was 57.

Low-light conditions are not ideal for photography. Even for professional photographers, it was a significant challenge to meet the expectations of the panelists.

A big challenge for smartphone cameras: The backlit scene

 

Exposure has the greatest impact on satisfaction

The portraits taken with a professional camera received an impressive overall average score of 77. By comparing the professional camera results to the smartphone results, we were able to better understand the trends in user preferences and identify areas of dissatisfaction.

Respondents often expressed dissatisfaction with the overall color rendering and incorrect exposure of faces.

Here are some other takeaways:
● users have strong expectations about brightness levels, skin color, and overall color rendering (as shown in the SDR scenes);
● users strongly penalized underexposure of the face, which had a significant impact on overall satisfaction (as shown in the indoor scenes);
● the most saturated or brightest image was not necessarily the preferred option.

In low-light situations, users expect the resulting photos to be similar to typical indoor photos, often unaware of the impact of low light on photography. This contributes to a greater disparity in user perception.

However, exposure remains a top priority for users and has a significant impact on their satisfaction.

When shooting at night, users want to maintain the ambiance of the scene while ensuring the subject’s face is properly exposed. This can be a challenge even with a professional camera, especially when the subject has a darker skin tone.

 

“People demonstrate remarkably specific preferences and a keen eye for details. In that context, standard consumer insights (ranking, device comparisons, etc.) are not enough to understand them.”
Hervé Macudzinski, Image Science Director, DXOMARK

 

The perceived quality of images varies across all skin tones pictured

Photos of people are captured in various conditions, from indoors to outdoors, day or night, sunny or backlit. Despite its popularity, this type of photography is technically challenging for smartphone cameras.
Through our rigorous scientific approach, this new survey provides insight into the factors that influence respondents’ choices. One of these factors is the rendering of the model’s skin tones, revealing image quality issues.

Satisfaction varies depending on age

A total of 123 panelists participated in the survey, divided into subgroups based on gender and age. This allowed us to gain initial consumer insights.

Younger consumers (under 40) were more selective when rating portrait pictures than those over 40. They also had a lower overall Satisfaction Index. In particular, there were significant differences in scenes with higher technical complexity, such as low-light, HDR, and backlit scenes.

This suggests that younger people are more sensitive to image quality issues than older people. Face exposure emerged as an area of particular concern for young people.

Given their discerning nature and high demands, satisfying the younger demographic is key for manufacturers.

 

Satisfaction varies depending on gender

We also noticed a significant discrepancy in the Satisfaction Index between male and female panelists.

In all conditions, women had higher expectations of image quality than men. The more challenging the conditions in a given scene, the more degraded the image quality, and women were more adept at recognizing this degradation.

This difference in expectations between genders proved to be the most substantial compared to other subgroups, such as age, cultural heritage, or skin tone.

Satisfaction of every respondent varies depending on the skin tone of the model

A total of 83 models participated in the study, representing a wide range of skin tones.

As previously discussed in our methodology article, we used the Fitzpatrick scale, a widely used classification system for categorizing different skin tones. However, it is important to note the limitations of this scale, as it may not encompass the full spectrum of skin tones.

Still, the survey results clearly show that the presence of people with darker skin tones in photographs consistently correlates with lower levels of satisfaction. Crucially, this finding is not an issue of representation, as it applies to all respondents, including models, photographers, and DXOMARK experts. Respondents across the board judged these pictures less effective, finding that smartphones deliver less favorable renderings when the skin tone deviates from white.

Satisfaction Index per skin tone type

Hence, the problem lies in the inadequate rendering of darker skin tones. Other factors contributing to lower satisfaction include incorrect white balance and poor overall exposure for the same scene with a darker-skinned model.
The satisfaction scores declined as skin tone darkened, suggesting that the problem is not exclusively related to darker skin tones but to any skin perceived as “not fair.” While light-skinned models were consistently rendered with similar image quality across devices, rendering challenges arose with any non-fair skin tone.

Tuning issues are more visible on deeper skin tones

The two photos on the left were taken with the same device. With a darker skin tone in the same scene, more minor issues appeared that affected the overall Satisfaction Index, such as the underexposure observed in the second picture.

However, the device used for the two photos on the right delivered equal satisfaction for both skin tones. The device on the left failed to provide satisfactory results for the darker skin tone because of lower exposure settings and a lack of adaptation to the skin tone of the person in the scene.

These examples highlight the challenge for smartphones, which must adapt and use different tuning/settings to achieve optimal renderings for all individuals.

A reminder

This survey was not designed to assess device quality, but rather user rendering preferences. The preferred rendering was not always the most “natural” or the one that “accurately rendered” skin tone.

 

Room for improvement

Satisfaction with outdoor portrait photography was high, yet not flawless. As previously explained, it depends on several criteria: lighting conditions, of course, but also manufacturers’ tuning choices.

To provide a basis for comparison, we included pictures taken and edited by professional photographers using DSLRs in the survey. These images represent what an “ideal target image” might look like and what would be considered perfect from the photographer’s point of view. The average Satisfaction Index for the professional pictures was 74. The lowest index, observed in low-light conditions, was 71.

Chart 1_Ultra Premium vs Photographer Rendering

Of the smartphones assessed, only one device achieved an overall score of 71, with a high Satisfaction Index in all lighting conditions. The other two devices received significantly lower scores from our respondents.

Is the smartphone camera just a tool for capturing memories? Far from it. With its embedded technology, such as computational imaging capabilities, the smartphone camera plays the role of the photographer by making decisions on behalf of the user.

Through technological advancements over the years, smartphone cameras have made significant progress in bridging the gap with DSLRs in many ways.
The results indicate that there is a significant need for improvement in low-light, night, and backlit portrait photography, as users were highly dissatisfied with the results. For example, when shooting at night, users are unwilling to compromise between capturing the ambience of the nighttime setting and ensuring that the subject’s face is well exposed.

Photographer satisfaction: A guide to tomorrow’s consumer demands?

Thirty photographers and 10 DXOMARK image quality experts participated in this survey. Although one smartphone received high satisfaction scores, these participants were able to distinguish its photos from the photographer’s renderings.

The photographers’ Satisfaction Index was significantly lower because they knew exactly what they were looking for in different situations, leading them to be more demanding and to reject more pictures than consumers did. They are also able to detect subtle issues, independently of their expectations regarding signature and aesthetic goals.

The professionals had high expectations for all types of scenes and lighting conditions. And their top reasons for rejecting photos were exposure and color rendering.

The disparity between smartphones and photographer renderings was even more pronounced when it came to certain lighting conditions, such as low light and night photography. We found that the more challenging the conditions, the greater the preference for professional rendering.

In summary, the general preference for photographer rendering provides valuable insight into the ideal target rendering that manufacturers should strive to achieve. This knowledge can guide their efforts to meet and exceed consumer expectations in the future.

Smartphone portrait photography and skin-tone rendering: How did we measure user preferences? https://www.dxomark.com/dxomark-methodology-skin-tone/ Fri, 02 Feb 2024 14:37:43 +0000

Portraits are the most valued and popular type of photography, yet capturing great portraits remains technically challenging for smartphone cameras. The specific issue of achieving accurate skin tones, for instance, has received considerable attention from researchers and manufacturers alike.

After taking into account all previous work on this topic, DXOMARK’s image quality experts conducted their own extensive qualitative study aiming to:

● identify trends in users' preferences regarding portraits (pictures of a single person as well as of a group of people);
● identify elements of satisfaction and frustration;
● explore the technical challenges.

To achieve this, we designed a unique methodology that allowed us to gather detailed insights from the most common use cases, environments, and conditions. This scientific methodology was used first for our European study in Paris, but it could be easily applied to other regions of the world, to other areas of study as well as other electronic products (laptops, for example).

We present it to you here.

The question of perception

Perception is a challenge when it comes to portrait quality evaluation. Indeed, people’s preferences when it comes to photos are often tied to their memories and familiarity with the subject. Hence, we hold our portraits and those of other people to different standards.

This begs the question: which qualities can most people agree a “good portrait” should have?

To find answers, DXOMARK’s image quality experts conducted this new qualitative study aiming to identify the reasons of frustration and the key pain points in smartphone portrait photography.

Understanding and measuring user preferences

The methodology our experts built allowed them to achieve two main objectives.

Understanding user preferences

This requires a comprehensive analysis that encompasses all smartphone camera uses (meaning types of portraits here), and the variety of conditions they take place in. Of course, each usage presents unique technological challenges.
Only by fully understanding each of these uses and the technical difficulties they pose could we simulate highly accurate test conditions.

Measuring user preferences

This is a critical component of the analysis: it requires a scale that represents the perceived quality of each type of portrait, as judged by an individual or a test group.

Conducting this analysis with a large group and creating a test scenario that closely resembles real-world usage was key to the success of this study.

Analyzing portrait preferences in relation to skin-tone rendering

We took on the question of how skin-tone rendering quality is perceived in smartphone portrait photography.
This required:

● gathering a panel representative of all skin tones as well as diverse cultural backgrounds, age groups, and genders;
● developing a relevant shooting plan.

A shooting plan is a set of diverse photos used to identify users' portrait preferences. We have refined such shooting plans over many years, and quantitative surveys conducted per region are very useful for understanding the preferred use cases. In the context of this study, a "relevant" plan is one that covers most use cases.

The shooting plan: A key component to anchor users’ insights in real life

The technical framework

The photographer

The shots had to be taken by professional photographers. Why? Because we needed perfectly comparable shots. The challenge lay in accurately capturing the same scenes with different devices.

The devices

Our goal was not to compare devices or evaluate their performance but rather to gain insight into user preferences regarding the top offerings on the market.

Therefore, we used the most advanced smartphones, the flagship devices, available at the time of the study, as well as a professional digital camera that allowed us to look at what the future of smartphone photography may hold.

⚠️ For each scene and type of portrait, four different devices were used: three smartphones and a professional camera.
These allowed us to identify the main trends in preferences, each of which could then be studied in more depth.

Scenes and stages

The location

The shooting plan was tailored to the specifics of the geographical area under study. In our case, it was Paris. Our professional photographers curated a plan that embodied the look, feel and essence of a European way of life. Our ultimate goal was to capture images that would resonate with the European panel.

💡This type of study can be replicated anywhere in the world, with local photographers capturing and showcasing their respective regions’ unique customs and traditions.

The stages

A stage refers to a combination of:
● places
● lighting conditions
● framing (scene composition)
● number and position of respondents
A total of 180 stages were shot, with one, two, three, or four models in each.

The scenes

A scene is a specific combination of a stage and models placed within it. The model is the variable between different scenes within the same stage.

[Image galleries: sample scenes for Skin Tone Set 1, Skin Tone Set 2, and Skin Tone Set 3]

Our shooting plan was comprehensive, covering all types of scenes. The scenes were partitioned in the following way (each number represents the number of scenes shot for the corresponding condition):


 

“We understand that certain test conditions, especially night scenes, can be more challenging than others. That’s why we included a large variety of conditions during testing.
We also enriched the shooting plan with lab scenes, which feature models in front of a white background, under neutral and consistent lighting. This controlled environment allowed us to focus only on the rendering of the portrait and its reception by the panel, without the scene, light or conditions affecting the results.”

Hervé Macudzinski, Image Science Director, DXOMARK

 

The light conditions

We shot a total of 1,620 pictures in HDR, SDR and backlit conditions, partitioned as shown in the following chart:

● backlit (very challenging light conditions)
● SDR or Standard Dynamic Range (limited range of brightness and colors)
● HDR or High Dynamic Range (high range of brightness and colors)

Survey respondents

The respondents

We put together a panel of European people representing all skin tones as well as a variety of cultural backgrounds, genders, and ages. A total of 123 people participated in the survey, with 83 models/respondents photographed in 405 scenes, 30 professional photographers, and 10 DXOMARK image quality experts, making this one of the largest studies of its kind.

Both genders were almost equally represented, with the panel made up of:

● 52% women
● 48% men

Every adult age group was included as well:

● 18 to 30 years old (25%)
● 30 to 40 years old (29%)
● 40 to 50 years old (19%)
● 50 to 60 years old (15%)
● Over 60 years old (12%)

To select and classify the respondents based on their skin tone, we used the Fitzpatrick scale, a tool used to determine how different skin types react to the sun. The scale organizes skin types into 6 distinct categories, all included in our study:

● Type I, “light” (12%)
● Type II, “fair” (30%)
● Type III, “medium” (23%)
● Type IV, “olive” (2%)
● Type V, “brown” (8%)
● Type VI, “deep” (4%)

About the Fitzpatrick scale
Originally developed for medical purposes, the Fitzpatrick scale is commonly used for classifying skin across various industries. While it is a robust and widely used tool, numerous scientific publications have pointed out its limitations. For example, it does not take into account the difference between skin type and skin tone. The paper “Beyond Skin Tone: A Multidimensional Measure of Apparent Skin Color,” for instance, highlights the need for a more comprehensive measure of skin color. The Fitzpatrick scale also relies on self-reporting, which can bring with it unintentional bias.
[Figure: The Fitzpatrick scale]
“We adopted the Fitzpatrick scale as it is widely used to classify skin tones. However, it may not provide enough granularity for medium to dark skin tones, resulting in potential classification inaccuracies. Therefore, we are exploring the possibility of using alternative scales such as the Monk scale or the Individual Typology Angle (ITA) in future rounds.”
Benoît Pochon, Image Science Director,  DXOMARK

Unveiling the Satisfaction Index

The DXOMARK Satisfaction Index is a numerical representation of user preferences. It combines two distinct aspects measured in this study: one measures preference and the other measures rejection. By combining these two results, we were able not only to gather insights about user preferences but to quantify them as well.

Participants took all the tests under controlled viewing conditions and were unaware of the devices that were used to capture the images.

The details of how we created the DXOMARK Satisfaction Index are presented below.

The two-step user survey

Step 1: The best picture

First, participants were presented with only two images, side-by-side, and asked to select the one image that they preferred based on its overall image quality.

Pairwise comparison

In order to quantify the perceived difference in quality, our experts used a Just Objectionable Difference (JOD) scale.

Pairwise Comparison and JOD scale
This method allowed our experts to rank the pictures by crossing the results of several comparisons. For example, two images were considered to be 1 JOD apart if 75% of observers found that one had better quality than the other.
Ranking pictures according to a JOD scale requires the use of advanced statistical techniques, in order to ensure enough comparisons are made to converge to a reliable estimate.
Those techniques also allow experts to acquire more information. For instance, a confidence interval for the JOD scores of a given group can be determined using a statistical method known as bootstrapping, which relies on repeated resampling of a set of data in order to accurately estimate a result for a particular group.

At the end of the survey, for each participant, every image taken with each of our four cameras was given a preference score. We could then aggregate those results to estimate a preference score for groups of participants.
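
As an illustration of how such preference scores and their confidence intervals can be derived, here is a minimal sketch in Python. It assumes a simple Thurstone-style model in which a 75% preference maps to exactly 1 JOD, plus a percentile bootstrap over observers. The count-matrix layout and the simple averaging of pairwise distances are assumptions made for brevity; this is not DXOMARK's actual pipeline, and published scaling tools solve a maximum-likelihood problem to handle incomplete comparisons.

```python
import numpy as np
from scipy.stats import norm

def jod_distances(counts):
    """Convert a pairwise preference count matrix into JOD distances.

    counts[i, j] = number of observers who preferred image i over image j.
    Rescaled so that a 75% preference corresponds to exactly 1 JOD,
    as described in the article. Simplified illustration only.
    """
    totals = counts + counts.T
    p = np.where(totals > 0, counts / np.maximum(totals, 1), 0.5)
    p = np.clip(p, 0.01, 0.99)            # avoid infinite distances at 0 or 1
    return norm.ppf(p) / norm.ppf(0.75)   # 75% -> exactly 1 JOD

def jod_scores(counts):
    """Score each image as its mean signed JOD distance to the others."""
    return jod_distances(counts).mean(axis=1)

def bootstrap_ci(observer_votes, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for JOD scores,
    resampling observers with replacement.

    observer_votes: array of shape (n_observers, n_images, n_images),
    one 0/1 preference matrix per observer.
    """
    rng = np.random.default_rng(seed)
    n_obs = observer_votes.shape[0]
    samples = []
    for _ in range(n_boot):
        idx = rng.integers(0, n_obs, size=n_obs)   # resample observers
        samples.append(jod_scores(observer_votes[idx].sum(axis=0)))
    samples = np.array(samples)
    lo = np.percentile(samples, 100 * alpha / 2, axis=0)
    hi = np.percentile(samples, 100 * (1 - alpha / 2), axis=0)
    return lo, hi
```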

Step 2: Social media-worthy picture

In the second part, participants were presented with four images of the same scene taken with different cameras (one with a professional camera and three with smartphones), and then asked to identify which image they would not want to post on social media, effectively, which image or images they would reject. The goal of this question was to refine our preference analysis.

Relative rejection

 

Why social media?
We wanted to measure acceptability. Our question was: “what do respondents consider to be the minimum level of quality acceptable?”. In that regard, social media provides a criterion that speaks to everyone yet remains significant. If we had simply asked people which photo they would keep, they might have chosen a lower-quality option because of their sentimental attachment to it.
“We needed a criterion for evaluating the quality of photos. Social media suitability proved to be the ideal one.”

Hervé Macudzinski, Image Science Director,  DXOMARK

 

Calculating the Satisfaction Index

After conducting this two-step survey, we collected the following information for each scene:

● the overall rejection rate for all respondents
● the rejection rate for the group being studied
● the JOD scores

With the collected data, we used the formula below to calculate the Satisfaction Index score per picture, and we scaled the result so that it would fit within a range of 0 to 100.

[Figure: Satisfaction Index formula]

Taking into account the confidence interval for each portion of the index, we could also determine a confidence interval for the overall Satisfaction Index.

A Satisfaction Index below 70/100 meant that the photo might not meet user expectations.

Conversely, examining the characteristics of photos with scores above 70 helped us identify the prevailing preferences within a given use case. This understanding allowed us to establish the commonalities between satisfying renderings, as well as their technical characteristics.
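
The exact formula appears only as a figure above and is not reproduced here. Purely as an illustration of how a preference term and a rejection term can be combined into a 0-to-100 score with a 70-point threshold, here is a hypothetical sketch; the normalization and the way the two terms are combined are assumptions, not DXOMARK's actual formula.

```python
def satisfaction_index(jod_score, jod_min, jod_max, rejection_rate):
    """Illustrative stand-in for the Satisfaction Index (NOT DXOMARK's
    actual formula, which is shown in the figure above).

    Combines a preference term (the JOD score, normalized over the range
    of scores observed in the study) with a rejection term, scaled 0-100.
    """
    preference = (jod_score - jod_min) / (jod_max - jod_min)  # 0..1
    acceptance = 1.0 - rejection_rate                          # 0..1
    return 100.0 * preference * acceptance

# Example: a well-liked photo (near the top of the JOD range) that
# 10% of the studied group would still reject.
score = satisfaction_index(jod_score=1.6, jod_min=-2.0, jod_max=2.0,
                           rejection_rate=0.10)
print(f"Satisfaction Index: {score:.0f}/100")  # 81/100, above the 70 threshold
```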

Why use a Satisfaction Index?

The Satisfaction Index is a homogeneous and comparable score that can be used to compare participants, groups of participants or scenes.

By examining the Satisfaction Index for each individual, we can gain valuable insight into their ability to identify image quality issues, how their preferences compare with those of other participants or groups, and the trends in their preferences.

Closing thoughts and considerations

This study analyzes the impact of shooting conditions and camera choice on image quality perception, but it also answers other related questions that provide us with consumer insights:

● Are smartphone users currently satisfied with the quality of their portraits?
● Do all high-end smartphones provide the same level of satisfaction in this respect?
● Do professional photographers produce more satisfying images overall compared to smartphones?
● If so, which gaps in quality can non-photographer users perceive?
● Does age influence quality perception?
● Does gender affect quality perception?
● Does the perceived quality of pictures remain the same regardless of the model’s skin tone?
● What other dimensions influence the respondents’ choices and their perception of image quality?

The complete study by DXOMARK Insights highlights the technical parameters that are key to ensuring high-quality portraits and user satisfaction. To manufacturers, that is vital information.

Stay tuned for the first results, coming soon!

The post Smartphone portrait photography and skin-tone rendering: How did we measure user preferences? appeared first on DXOMARK.

Speakerphones: See which ones performed best in our tests
https://www.dxomark.com/speakerphones-see-which-ones-performed-best-in-our-tests/ (Wed, 10 Jan 2024)
When DXOMARK introduced its Laptop testing protocol in June 2023, the main focus was to assess the laptop’s performance in two specific use cases: videoconferencing and multimedia playback. During our laptop tests, we identified several pain points in users’ audio experience. We also recognized that many consumers were using speakerphones both at work and at home to enhance their laptop’s audio capabilities, whether to facilitate meetings with multiple people or to just listen to music or watch movies. So as a complement to our laptop testing, we decided to run audio evaluations on several speakerphones to see how well they performed in videoconferencing and multimedia playback situations.

Testing methodology

Our methodology for testing speakerphones was the same as the one we use for testing laptop audio performance. We combined objective measurements and perceptual evaluations of all audio attributes (timbre, spatial, dynamics, volume, artifacts), performed in our anechoic laboratory as well as in simulated and real-life use cases and environments. Because the testing protocol was the same, the speakerphone scores are directly comparable with the laptop audio scores. Read more about the details of our laptop testing protocol.

All speakerphones were tested using the same laptop, a Lenovo ThinkPad X1 (Gen 10) running Windows. The selection of speakerphones was based largely on availability and popularity.
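
To give a flavor of what one of the objective measurements can look like in practice, here is a minimal sketch of estimating a capture signal-to-noise ratio (SNR) from recordings. It assumes you have a speech-active segment and a noise-only segment captured with the same setup; DXOMARK's actual protocol is far more extensive, using calibrated playback levels and many acoustic environments.

```python
import numpy as np

def snr_db(speech_plus_noise, noise_only):
    """Estimate a capture SNR in dB from two mono recordings:
    one taken while a talker is active (speech + background noise),
    one with background noise only. Simplified illustration."""
    total_power = np.mean(np.square(speech_plus_noise.astype(np.float64)))
    noise_power = np.mean(np.square(noise_only.astype(np.float64)))
    signal_power = max(total_power - noise_power, 1e-12)  # speech alone
    return 10.0 * np.log10(signal_power / noise_power)

# Synthetic check: a 1 kHz tone "voice" over white noise at 48 kHz.
fs = 48_000
t = np.arange(fs) / fs
noise = 0.01 * np.random.default_rng(0).standard_normal(fs)
capture = 0.5 * np.sin(2 * np.pi * 1000.0 * t) + noise
print(f"Estimated SNR: {snr_db(capture, noise):.1f} dB")
```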

Summary of the results

We evaluated nine speakerphones using our laptop audio protocol, and the results are in!

Speakerphone ranking

Two speakerphones came out on top: the Jabra Speak2 75 and the EPOS Expand 40.

The Jabra Speak2 75 earned the top spot in the ranking, with improvements in all audio aspects over the Jabra Speak 750. The Jabra Speak2 75 had the best performance in multimedia playback and video call capture, making it an excellent choice not only for office or personal video calls but also for listening to some music in between meetings.

Just behind the Jabra Speak2 75 was the EPOS Expand 40. EPOS, which was previously part of Sennheiser Communications, managed an excellent tuning of the capture performance, especially in meetings with multiple people taking part.

Both the Microsoft Modern USB-C Speaker (3rd) and the Poly Sync20 (4th) deserve an honorable mention: they were among the most affordable speakerphones tested yet performed admirably, especially on the capture side for the Microsoft device and the playback side for the Poly device.

Detailed results

Jabra Speak2 75

The Speak2 75 performed very well in multimedia playback performance, proving useful for music and movie use, thanks to warm tonal balance and good clarity. Alongside its playback performance, its microphones produced a very pleasant sonority in general. Voices recorded in our test had nice timbre and sounded natural; the only downside was the monophonic nature of the recordings, which made localizability a bit trickier. The device efficiently reduced background noise, leading to a satisfying SNR, although the digital signal processing (DSP) was less efficient when dealing with reverberant acoustics. An all-round good performance for this speakerphone.

Pros

  • Microphone has an excellent sonority
  • Excellent multimedia playback performance
  • Very efficient background noise reduction

Cons

  • Monophonic recording makes it hard to identify and localize voices
  • SNR not as efficient in reverberating acoustical environments

 


EPOS Expand 40

The Expand 40 had a nice, if somewhat dark, sonority during playback. Although it is not necessarily the best choice for multimedia consumption, voices sounded natural and warm. Capture performance was a bit less satisfying, due to voices sounding muffled and recordings being monophonic. However, the speakerphone functioned particularly well in duplex speech situations, and its handling of artifacts was satisfactory in both playback and capture.

Pros

  • Great duplex capabilities during video call and meetings
  • Good multimedia playback performance
  • Very few artifacts

Cons

  • Recordings sound muffled
  • Monophonic recording makes it hard to localize voices

 


Microsoft Modern USB-C Speaker

The Microsoft speakerphone provided a good experience overall, especially in capture, where it had a pleasant recording timbre, excellent directivity in the meeting use case, and a satisfying performance in duplex speech situations. The sonority in playback is warm and voices sound good, although they can lack a bit of brilliance and tend to be impaired by inconsistent noise reduction and/or envelope rendition. The device was also affected by several artifacts in playback and capture alike, but its overall performance was nonetheless satisfactory.

Pros

  • Good recording timbre
  • Excellent directivity in meeting use case
  • Great overall performance in duplex speech situations

Cons

  • Inconsistent envelope rendition and/or noise reduction during capture
  • Artifacts impact the quality of playback and recording

Poly Sync20

The Poly Sync20 performed very well across the board. Its playback capabilities made for pleasant and intelligible voice rendition and a warm tonal balance, enhanced by a strong presence of low end, which made it especially good for multimedia use. Timbre rendition through its microphones was not as good, as voices tended to sound a bit aggressive, but background noise reduction was very effective, and its directivity was well suited for meetings.

The device’s microphones did not handle duplex speech particularly well, with quieter voices easily affected by gating.

Pros

  • Very good performance in video call and multimedia playback
  • Microphone provides excellent directivity for meetings
  • SNR is excellent in all capture use cases

Cons

  • Captured voices tend to sound aggressive
  • Duplex speech is affected by strong gating

Beyerdynamic Space

The Beyerdynamic Space has strong playback capabilities, thanks notably to its pleasant and intelligible voice rendition. The speakerphone is also well suited for listening to music, delivering a warm tonal balance and snappy dynamics, especially at loud volumes. You can also use it to watch movies, if you don’t mind the monophonic rendition or the low midrange sounding a bit muddy at times. But all in all, the playback experience is great, and devoid of artifacts.

As for capture, the device seems promising but leaves room for improvement: audio processing is very efficient at reducing background noise, resulting in great SNR; but although the dynamic envelope is still realistic in most use cases, gating can occur on quieter voices due to background noise reduction going a bit overboard. This becomes especially problematic during duplex speech, as volume drops and other artifacts greatly impair intelligibility. Furthermore, the tonal balance delivered by the microphones lacks both bass and treble to some extent.

Pros

  • Very good performance in multimedia playback
  • Great SNR in capture

Cons

  • Captured voices sound thin (poor timbre rendition)
  • Strong gating in duplex speech situations

Logitech Speakerphone P710e

The Logitech speakerphone underperformed in our tests, especially in capture, where unpleasant timbre rendered voices muddy and unclear. SNR was great, but the DSP was not efficient enough when it came to reverberant acoustics and duplex speech. As for the playback experience, it provided relatively good sonority for video calls and meetings, but not enough for a good multimedia experience.

Pros

  • Great all-round SNR
  • Few to no artifacts

Cons

  • Poor recording timbre
  • Many artifacts during duplex speech

Yamaha YVC-200

The YVC-200 offers good vocal clarity through its microphone as well as great envelope rendition and intelligibility. Its timbre performance in playback was equally good in video call and meeting use cases, and capture directivity was suitable for both scenarios.

However, background noise was very intrusive in all use cases, to the point where video calls and meetings were less pleasant on the receiving end. The device did not handle duplex speech very well, as both voices were barely intelligible. Finally, music and movies did not sound good on this speakerphone.

Pros

  • Very good intelligibility (voices clear in capture)
  • Good performance in meeting playback

Cons

  • Intrusive background noise in all capture use cases
  • Unintelligible duplex speech
  • Unsuitable for multimedia purposes

Jabra Speak 510

The Jabra 510 does fairly well with video calls, but less so with meetings. While its rendition of speech through its microphone is pleasant and intelligible (thanks to satisfying dynamics), it does not properly capture all voices equally around it, as voices on the sides and to the rear of the speakerphone often sound quieter and more distant than they should. Conversely, this property enhances the experience in one-to-one video calls, as background noise reduction is quite effective, resulting in very good SNR. Duplex speech is nearly impossible, however, as both voices are unintelligible when speaking at the same time. Furthermore, its playback timbre is unsuitable for multimedia content, and distortion is perceptible when listening to music.

Pros

  • Great overall SNR in capture
  • Decent capture dynamics

Cons

  • Not suited for multimedia purposes
  • Very strong gating in duplex use cases

Jabra Speak 750

The Jabra 750 did not perform very well in any of our use cases. Although its microphone directivity was well suited for meetings, its timbre and dynamics performance during capture left much to be desired, with muddy, unclear, and compressed sound that was prone to distortion. The same capture issues were present in video calls, and additionally, microphone directivity was less well adapted. However, background noise reduction was quite effective, and the device handled duplex speech fairly well.

Playback performance was not much better, whether for video calls, meetings, or multimedia usage.

Pros

  • Great SNR across all use cases
  • Excellent directivity in meeting use case

Cons

  • Subpar timbre performance across all capture and playback use cases
  • Not suited for multimedia use

 

The post Speakerphones: See which ones performed best in our tests appeared first on DXOMARK.

DXOMARK Decodes: A brief look at smartphone charging and compatibility
https://www.dxomark.com/dxomark-decodes-a-brief-look-at-smartphone-charging-and-compatibility/ (Thu, 21 Dec 2023)
So you just unpacked your new smartphone from its box, and as is common these days, it didn’t come with a charger, but it came with a USB-C cable, or maybe not even the cable. You begin to wonder whether you can safely use a charger from your other smartphones, or whether you should buy one from the smartphone brand or just buy an off-brand high-watt charger that advertises super-fast charging times.

Sound familiar? Although you might have lots of cables and chargers that are compatible, you’ll find that smartphone charging compatibility is far more complex than just plugging in any charger and cable that fits. In this article, we’ll try to shed some light on this topic.

Charging compatibility made headlines recently because of a new European Union law that goes into effect in the fall of 2024 that requires electronic devices sold in the EU to adopt the USB-C charging cable and port. But the law goes beyond that.

Manufacturers will also have to provide relevant information about charging performance, for example, power requirements and fast charging support. This information will make it easy to work out if an existing charger will work with your new device and will help to select a new compatible charger if required. This law aims to limit the need to buy new chargers and to allow for the reuse of existing chargers, thus cutting down on waste.

How chargers and smartphones interact

If every smartphone uses the same connector, will charging be the same for every smartphone? Even though the connector is the same, the way a device charges is far from uniform because of the wide variety of charging protocols that exist.

What is a charging protocol? A charging protocol is a set of rules and specifications, defined either by OEMs or by industry organizations like the USB Implementers Forum (USB-IF), that governs the energy delivery from the power source to the rechargeable device. The charging protocol normally specifies the voltage and the current to be used during the charging process, as well as the safety features and the communication between devices. Charging protocols are often standardized by industry organizations to ensure compatibility between devices and chargers.

USB Power Delivery (USB-PD) is a universal charging protocol standardized by the USB-IF. There are four versions of the USB-PD standard, the latest being version 3.1 (adopted in 2021), which offers fast-charging capability all the way up to 240W (currently only for laptops). The same charging protocol supports different connector types, such as USB Type-C and Apple Lightning. The advantage is that standard protocols offer more compatibility.

However, some manufacturers have implemented their own proprietary charging protocols, which allow them to reach high levels of charging power with their own devices, but not with devices from other brands. The EU ruling will require that manufacturers using proprietary charging protocols also support the universal USB Power Delivery protocol for better cross-compatibility.
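
To make the idea of protocol negotiation concrete, here is a small sketch in Python. It is a toy model only, not the real USB-PD message exchange: the PDO values, function names, and selection rule below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PDO:
    """A simplified 'power data object': one fixed voltage/current offer."""
    voltage_v: float
    max_current_a: float

    @property
    def max_power_w(self) -> float:
        return self.voltage_v * self.max_current_a

def negotiate(source_pdos, sink_max_voltage_v, sink_max_power_w):
    """Pick the highest-power offer the device can accept.
    Toy model of the negotiation, not the real wire protocol."""
    acceptable = [p for p in source_pdos if p.voltage_v <= sink_max_voltage_v]
    if not acceptable:
        return None
    return max(acceptable, key=lambda p: min(p.max_power_w, sink_max_power_w))

# A hypothetical 65 W USB-PD charger and a phone that accepts up to 9 V / 27 W.
charger = [PDO(5, 3), PDO(9, 3), PDO(15, 3), PDO(20, 3.25)]
chosen = negotiate(charger, sink_max_voltage_v=9, sink_max_power_w=27)
print(chosen)  # PDO(voltage_v=9, max_current_a=3) -> charge at up to 27 W
```

If charger and phone share no fast-charging protocol, they fall back to a lower common denominator, which is exactly the behavior the cross-compatibility tests below illustrate.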

Complexities of smartphone charging

Smartphone battery charging is not a linear process in which the charging power remains at a constant level from 0% to 100%.

The following graph illustrates how the charging power evolves during the charging process, along with the battery percentage displayed on the screen. The graph also points out 80% of full charge capacity, the moment 100% is shown on the display, and the true full charge. The dark line shows the varying levels of charging power: the peak, just under 42W, is reached in the first few minutes. Peak charging power heats the battery quickly, so it is held for only a few minutes; the battery keeps charging, but at progressively reduced power. In the graph below, we still see a few peaks, but they stay between the 30W and 40W levels.

Each manufacturer decides at which point the display shows 100%, which in practice indicates that the battery is nearing a full charge rather than completely full.
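
To visualize the shape described above, here is a small, purely illustrative model of a fast charge that peaks early and then tapers. The numbers and the linear taper are assumptions for the sketch, not measured data; real curves, like those in DXOMARK's graphs, also show thermally driven steps.

```python
def charging_power(battery_pct, peak_w=42.0, taper_start_pct=15.0, trickle_w=5.0):
    """Toy model of a fast-charge power curve: full peak power early on,
    then a roughly linear taper toward a low top-off power near 100%."""
    if battery_pct <= taper_start_pct:
        return peak_w
    frac = (battery_pct - taper_start_pct) / (100.0 - taper_start_pct)
    return peak_w - frac * (peak_w - trickle_w)

for pct in (5, 25, 50, 80, 100):
    print(f"{pct:3d}% -> {charging_power(pct):5.1f} W")
```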

The following chart shows how two superchargers behave. The 150W charger nearly reached its maximum, while the 240W charger hit its advertised power only once during the charge, at the very beginning. This shows that even fast chargers usually peak at the advertised power for only a moment before dropping to lower levels to protect the device and battery.

 

DXOMARK provides this detailed graph in every smartphone battery test result.

Charger compatibility

Earlier this year, we tested the cross-compatibility of chargers between various brands and published our results in an article. In summary, our findings showed that a proprietary 240W fast charger could achieve that level of charging with the smartphone it was specifically made for (if only for a brief moment, as seen earlier). But when used with another brand's phone, the charging power might only reach up to 45W over the USB-PD protocol, as both devices fall back to the best protocol they have in common.

Testing the iPhone 15 Pro Max

Since Apple recently introduced the USB Type-C port with the iPhone 15 series, we were eager to test non-Apple chargers and cables with the iPhone 15 and iPhone 15 Pro Max. We ran tests to verify the compatibility of the new iPhone 15 series with multiple chargers and cables from different brands, including original Apple ones and third-party options.

Our results showed that the latest iPhone 15 series was compatible with most third-party and other-brand chargers, with no significant difference in charging power. The iPhone 15 Pro Max drew around 28W to 30W maximum during charging, while the iPhone 15 drew around 22W.

For example, in one test, we charged the iPhone 15 Pro Max with a 30W iPad adapter, using USB-C cables from other phone brands. Our results showed that the charging power was constant at 27.6W.

We saw some slight variation when pairing the same Apple cable with chargers from different brands or third parties, as seen in the following graph. What stood out from these results was that the iPhone 15 Pro Max reached a peak charging power of 29.4W with a 45W third-party charger, a bit higher than the 27.6W reached when using the Apple-brand cable and charger combination.

It's also interesting to note that a superfast 160W charger did not yield higher readings than the 45W charger. We did notice, however, that the iPhone 15 series achieved a slightly higher peak charging power with certain Android chargers supporting USB PD 3.0 than with an original Apple charger.

The iPhone 15 Pro Max’s charging performance with Apple brand as well as off-brand chargers.
The iPhone 15’s charging performance with Apple brand as well as off-brand chargers.

We also tested the iPhone 15 Pro Max's charging compatibility with matched third-party cables and chargers from the same brand. The iPhone 15 Pro Max was able to charge with most brands.

This illustrates the complexities of the overall charging process. All the components of charging — the adapter, the cable and the phone — have to recognize each other to work together if the charge is to achieve its highest power possible.

Conclusion

As you can see, there's much to consider when choosing the right charger and cable for your smartphone. In the case of the iPhone 15 Pro Max, the device did not surpass 30W with any charger, even one capable of supercharging. The safest bet is to stick with the smartphone manufacturer's charger and cable, but that doesn't mean third-party options should be entirely dismissed. As we also saw with the iPhone 15 Pro Max, some third-party chargers delivered a bit more charging power to the device than Apple's charger.

The move to standardize on USB-C and USB-PD is a big step in the right direction, even if not all chargers or cables will supply the same amount of charging power to different smartphones. As our tests showed, even proprietary super-fast chargers peak at their highest charging power only briefly, even with their own devices; this is to be expected. Nevertheless, the requirement to support a common, universal protocol will ensure that regardless of which phone you have or which cables and chargers you buy, you will be able to safely charge your phone.

We hope that this article has given you a better understanding of the complexities involved in smartphone battery charging. Watch the video on cross charging:

Be sure to check out more content in our Decodes series, where we try to explain concepts or dispel myths related to technology, particularly in smartphones and other consumer electronics.

The post DXOMARK Decodes: A brief look at smartphone charging and compatibility appeared first on DXOMARK.
