LICASO and DAYSIM

Ian Ashdown1, Chris Jackson2, Joel Spahn3, Todd Saemisch3

1. SunTracker Technologies Ltd., Victoria, Canada. 2. Lighting Analysts Ltd., London, UK. 3. Lighting Analysts Inc., Littleton, CO.

LICASO™ and DAYSIM are daylighting analysis software programs that perform climate-based annual daylight simulations, including the calculation of annual daylight metrics. This white paper compares their performance in terms of accuracy and calculation times. Average illuminance and spatial daylight autonomy values agree to within 8 percent, while LICASO is approximately 250 times faster than DAYSIM for a benchmark model.

1.  Climate-Based Daylight Metrics

Quoting from the IES Lighting Handbook, Tenth Edition,5 the definition of daylight autonomy seems simple enough: “… the measure of the percentage of the operating period (or number of hours) that a particular daylight level is exceeded throughout the year.” It includes spatial daylight autonomy (sDA)6, which “… reports the percentage of sensors (or building area) that achieves a minimum daylight illuminance level (typically 300 lux) for a minimum percent of the analysis year (time).” Other dynamic daylight metrics include continuous daylight autonomy (cDA)6, maximum daylight autonomy (mDA)6, useful daylight illuminance (UDI)9 and more, all with seemingly simple definitions.
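Although the definitions are simple to state, they reduce to per-sensor statistics over the hourly illuminance record. The following is a minimal sketch of DA and sDA as array operations (function and array names are illustrative, not from either program):

```python
import numpy as np

def daylight_autonomy(hourly_lux, threshold=300.0):
    """Per-sensor fraction of occupied hours in which the threshold is met.

    hourly_lux: array of shape (n_occupied_hours, n_sensors).
    """
    return (hourly_lux >= threshold).mean(axis=0)

def spatial_daylight_autonomy(hourly_lux, threshold=300.0, min_fraction=0.5):
    """Fraction of sensors whose daylight autonomy meets min_fraction
    (sDA300/50% with the default arguments)."""
    return (daylight_autonomy(hourly_lux, threshold) >= min_fraction).mean()
```

The same pattern extends to the other annual metrics by changing the per-hour credit function.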

Simple to say, yes, but calculating these metrics is another matter entirely. Behind the scenes are devilishly complex algorithms that require massive amounts of computation. Until recently, the only options for professional lighting designers and architects have been based on the justly acclaimed Radiance suite of lighting simulation tools. A typical example is DAYSIM, which is described as “… validated, Radiance-based daylighting analysis software that models the annual amount of daylight in and around buildings.”

LICASO is the first climate-based annual daylight simulation software program that is not based on Radiance. More than this, it is not even based on the Radiance computational model of ray tracing. Rather, it relies on proven radiosity methods,1 and in particular the algorithms that have been driving Lighting Analysts’ AGi32 and ElumTools lighting design and analysis software products for nearly two decades. (See Lighting Analysts’ blog article Climate-Based Daylight Modeling for further details.)

… but enough gratuitous advertising. The intent of this white paper is to compare the performance of LICASO and DAYSIM in terms of accuracy and calculation times. This is not a case of which program is better suited for any given application, but simply to see whether the two programs are indeed comparable.

2.  Benchmark Model

The spatial daylight autonomy metric has been adopted for use with green building certification by the US Green Building Council10 and the International WELL Building Institute.3 However, the United Kingdom Education Funding Agency mandates the use of both the spatial daylight autonomy and useful daylight illuminance metrics for its Priority School Building Programme.4 The benchmark model used in this white paper is therefore based on a typical 55 m² classroom in accordance with the PSBP baseline design.

The benchmark model consists of four identical rooms facing north, south, east, and west, with each room having two glazed windows (FIG. 1). Each room measures 7.5 meters long by 7.0 meters wide by 3.2 meters high.

Each window measures 1.68 meters wide by 2.0 meters high, is positioned 0.85 meters above the floor and 0.5 meters from the closest wall, and has a transmittance of 70 percent.

The floor reflectance is 20 percent, the wall reflectance is 60 percent, and the ceiling reflectance is 80 percent.

A virtual ground plane with 18 percent reflectance is assumed for LICASO. The equivalent for Radiance (and hence DAYSIM) is a 180-degree glow source (essentially an upside-down sky) with uniform luminance that is the horizontal illuminance due to the diffuse skylight and direct sunlight multiplied by the ground plane reflectance.8 DAYSIM allows the user to specify the ground plane reflectance, with a default value of 20 percent.
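For a Lambertian ground plane, the glow source’s uniform luminance is the reflected exitance (horizontal illuminance times reflectance) divided by π. A sketch of that conversion (the exact value DAYSIM assigns internally is not shown here, so treat this as the standard photometric relation rather than DAYSIM’s code):

```python
import math

def ground_glow_luminance(horizontal_illuminance_lx, ground_reflectance=0.2):
    """Uniform luminance (cd/m^2) of a Lambertian ground plane: illuminance
    times reflectance gives the luminous exitance (lm/m^2); dividing by pi
    converts exitance to luminance for an ideal diffuse reflector."""
    return ground_reflectance * horizontal_illuminance_lx / math.pi
```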

Figure 1 – Benchmark Model

A grid of 15 by 15 virtual photometers, spaced at 0.5-meter intervals, is centered in each room, with a mounting height of 0.75 meters (FIG. 2).

Figure 2 – Virtual photometer layout
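The photometer grid follows directly from the stated geometry; a sketch (note that 15 points at 0.5 m spacing span exactly 7.0 m, so in the 7.0 m direction the outermost photometers coincide with the walls):

```python
def photometer_grid(nx=15, ny=15, spacing=0.5,
                    room_length=7.5, room_width=7.0, height=0.75):
    """Return (x, y, z) positions for an nx-by-ny virtual photometer grid
    centered in the room's floor plan at the given mounting height."""
    x0 = (room_length - (nx - 1) * spacing) / 2.0
    y0 = (room_width - (ny - 1) * spacing) / 2.0
    return [(x0 + i * spacing, y0 + j * spacing, height)
            for i in range(nx) for j in range(ny)]
```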

3.  Simulation Parameters

The TMY3 weather file is LONDON/GATWICK-GBR (37760), with a site location of 51.15 degrees north and 0.18 degrees west. All simulations were run in one-hour increments for the entire year, with occupied hours of 8:00 AM to 4:00 PM for a total of 2,920 hours, and Daylight Saving Time from March 29th to October 25th.

3.1    DAYSIM

DAYSIM uses a modified version of the Radiance utility program rtrace called rtrace_dc, where the “dc” suffix represents “daylight coefficients.” Of the 46 user-specified parameters available for rtrace, rtrace_dc provides user access to 13 of them with default values (Table 1).

Table 1 – DAYSIM User-Specified Parameters

Parameter   Name                     Default Value
-aa acc     ambient accuracy         0.10
-ab N       ambient bounces          5
-ad N       ambient divisions        1000
-ar res     ambient resolution       300
-as N       ambient super-samples    20
-dj frac    source jitter            0.0
-dp D       direct pretest density   512
-dr N       direct relays            2
-ds frac    source sub-structuring   0.20
-lr N       limit reflection         6
-lw frac    limit weight             0.004
-sj frac    specular jitter          1.0
-st frac    specular threshold       0.15

It is not obvious to DAYSIM users what effect these parameters will have on the calculations. However, the same parameters are available for the Radiance utility program rpict, and so reference can be made to http://radsite.lbl.gov/radiance/refer/Notes/rpict_options.html, with rendering artifacts related to the relevant parameters enumerated in Table 2.

Table 2 – Artifacts Associated with DAYSIM Parameters

Parameter   Artifact                                    Solution
-aa         uneven shading boundaries in shadows        decrease value by 25%
-ab         lighting in shadows too flat                increment value
-ad         “splotches” of light                        double value
-ar         shading wrong in some areas                 double or quadruple value
-as         “splotches” of light                        increase to half of -ad setting
-dj         shadows are unnaturally sharp               increase value to 0.7
-dp         incorrect mirror reflections                double value
-dr         missing multiple mirror reflections         increment value
-ds         large sources cast unnatural shadows        decrease value by 50%
-lr         some multiple specular reflections gone     increment value
-lw         some specular reflections gone              decrease value by 50%

Some of these parameters are problematic, of course, in that their effects can only be seen in the renderings generated by rpict. Without access to these renderings, DAYSIM users have little choice but to accept its default values.
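For record-keeping, the Table 1 defaults can be captured as a parameter dictionary and flattened into a Radiance-style option string. (DAYSIM manages the actual rtrace_dc invocation itself; this fragment merely illustrates the flag syntax shared with rtrace and rpict.)

```python
# DAYSIM's default rtrace_dc parameters from Table 1.
DAYSIM_DEFAULTS = {
    "-aa": 0.10, "-ab": 5, "-ad": 1000, "-ar": 300, "-as": 20,
    "-dj": 0.0, "-dp": 512, "-dr": 2, "-ds": 0.20,
    "-lr": 6, "-lw": 0.004, "-sj": 1.0, "-st": 0.15,
}

def option_string(params):
    """Flatten a parameter dictionary into a command-line fragment."""
    return " ".join(f"{flag} {value}" for flag, value in params.items())
```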

More obvious are the effects of the parameter values on the calculation times. These are (again from the rpict documentation) enumerated in Table 3.

Table 3 – Calculation Times Associated with DAYSIM Parameters

Parameter   Execution Time Effect
-aa         direct; doubling this value approximately quadruples rendering time
-ab         direct; doubling this value can double rendering time
-ad         direct; doubling value may double rendering time
-ar         direct; effect depends on scene, can quadruple time for double value
-as         direct; effectively adds to -ad parameter and its cost
-dj         indirect; increasing value requires -ps parameter to be reduced
-dp         minor; affects start-up time only, higher values take longer
-dr         direct; depending on the scene, each new reflection can double time
-ds         inverse; halving value causes rendering time to approximately double
-lr         minor; increase causes very slightly longer rendering time
-lw         minor; decrease causes very slightly longer rendering time

 (The -ps parameter of rtrace is not accessible to the user, so presumably rtrace_dc modifies this parameter accordingly when the -dj parameter is changed from its default value.)

What is clear is that the -a* parameters can be very expensive in terms of calculation time, and should therefore be changed with considerable caution. In the absence of rpict renderings, however, the only visible indication of the effect of these parameters is the uniformity of the virtual photometer readings.

To illustrate this point, FIG. 3 shows the isolux distribution of photometer readings for two sets of -a* parameter values, with all other parameters being set to their DAYSIM default values.

Figure 3 – Visualization of the effect of DAYSIM -a* parameter values on photometer uniformity

Simply by looking at the isolux distributions, it is evident that setting -ad to 1000 and -as to 20 results in “splotches” of light. Raising these parameter values to 5,000 and 500, respectively, appears to resolve this issue, but at the expense of increased calculation time. (In this particular example, the DAYSIM calculation time increased by a factor of 6.5.)

It should also be noted that the optimal -a* parameter values are scene-dependent. Mastery of these parameters requires an in-depth understanding of how Radiance interpolates its cached irradiance values.

DAYSIM further offers three options for daylight coefficients:

  • Original with 65 representative direct solar positions (e.g., FIG. 4)
  • DDS (Dynamic Daylight Simulations) with 2,305 representative direct solar positions
  • Shadow testing with hourly direct solar positions taken directly from TMY3 weather file records

For the first two options, the contribution of the actual solar position is bilinearly interpolated from the representative direct solar positions. (The third option is reportedly rarely used because it is very expensive in terms of calculation time.)
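Bilinear interpolation over a regular grid of representative positions can be sketched as follows. This is a generic implementation over an azimuth-by-altitude table; DAYSIM’s actual daylight-coefficient weighting is more involved:

```python
import bisect

def bilinear_interpolate(azimuths, altitudes, values, az, alt):
    """values[i][j] is a tabulated quantity at (azimuths[i], altitudes[j]);
    both axes must be sorted ascending. Returns the bilinear estimate of the
    quantity at the actual position (az, alt)."""
    # Locate the grid cell containing (az, alt), clamped to the table edges.
    i = min(max(bisect.bisect_right(azimuths, az) - 1, 0), len(azimuths) - 2)
    j = min(max(bisect.bisect_right(altitudes, alt) - 1, 0), len(altitudes) - 2)
    tx = (az - azimuths[i]) / (azimuths[i + 1] - azimuths[i])
    ty = (alt - altitudes[j]) / (altitudes[j + 1] - altitudes[j])
    v00, v10 = values[i][j], values[i + 1][j]
    v01, v11 = values[i][j + 1], values[i + 1][j + 1]
    return ((1 - tx) * (1 - ty) * v00 + tx * (1 - ty) * v10
            + (1 - tx) * ty * v01 + tx * ty * v11)
```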

Figure 4 – Annual solar path (65 positions) for Freiberg, Germany. (Ref. 2)

3.2    LICASO

In accordance with radiosity methods, LICASO subdivides each surface into a two-level hierarchy of patches and elements (Fig. 5). For each interreflection, light is received by the elements of a patch and then reflected (and transmitted for translucent surfaces) from the center of the patch.1 The combined direct sunlight and diffuse daylight are interreflected (or “bounced”) between elements and patches in this way until mostly absorbed.

Compared to DAYSIM, LICASO has only four user-specified parameters for climate-based annual daylight modeling:

  • Maximum surface patch area
  • Maximum window patch area
  • Number of elements per surface patch
  • Stopping criterion for absorbed light

For the benchmark model, the maximum patch area is 1.0 m², the number of elements per patch is four, and the stopping criterion is 99 percent. (That is, the bounces of light stop when 99 percent of the interreflected light has been absorbed.)
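The stopping criterion can be illustrated with a scalar flux-balance sketch, with a single average reflectance standing in for the full patch-to-patch transfer (LICASO’s actual solver tracks flux per patch):

```python
def bounce_until_absorbed(initial_flux, reflectance, stop_fraction=0.99):
    """Bounce flux until the stated fraction of the injected light has been
    absorbed; returns (number of bounces, total absorbed flux)."""
    absorbed, unabsorbed, bounces = 0.0, initial_flux, 0
    while absorbed < stop_fraction * initial_flux:
        absorbed += unabsorbed * (1.0 - reflectance)  # lost to surfaces
        unabsorbed *= reflectance                     # continues bouncing
        bounces += 1
    return bounces, absorbed
```

With 50 percent average reflectance, for example, seven bounces are needed before 99 percent of the light is absorbed.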

Direct sunlight and diffuse skylight incident upon the windows are received by each window patch and transmitted into the room interior from the center of each patch. For the benchmark model, each window patch has an area of approximately 0.3 m².

Figure 5 – LICASO surface discretization into patches (blue lines) and elements (red lines).

LICASO defines 120 representative direct solar positions, as shown in FIG. 6, where the representative positions are calculated for each hour on the specified dates. The actual solar position for any given hour and date is then linearly interpolated from the representative direct solar positions for the same hour. (See Lighting Analysts’ blog article Climate-Based Daylight Modeling for further details.)

Figure 6 – LICASO representative direct solar positions.

LICASO generates a variety of daylight metrics:

  • Illuminance
  • Basic Daylight Autonomy (DA)
  • Continuous Daylight Autonomy (cDA)
  • Maximum Daylight Autonomy (mDA / maxDA)
  • Minimum Daylight Autonomy (minDA)
  • Spatial Daylight Autonomy (sDA)
  • Useful Daylight Illuminance (UDI)
  • Annual Daylight Exposure (ADE)
  • Annual Sunlight Exposure (ASE)
  • Spatial Annual Sunlight Exposure (sASE)
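Like DA and sDA, several of these metrics reduce to per-sensor statistics over the hourly illuminance record. Hedged sketches of UDI (using the 100–2000 lux bin of Nabil and Mardaljevic9; other bin limits appear in the literature) and cDA:

```python
import numpy as np

def useful_daylight_illuminance(hourly_lux, low=100.0, high=2000.0):
    """Per-sensor fraction of occupied hours with illuminance in [low, high)."""
    return ((hourly_lux >= low) & (hourly_lux < high)).mean(axis=0)

def continuous_daylight_autonomy(hourly_lux, threshold=300.0):
    """Per-sensor cDA: hours below the threshold earn the partial credit
    lux / threshold instead of zero."""
    return np.minimum(hourly_lux / threshold, 1.0).mean(axis=0)
```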

It also provides three-dimensional rendered views (e.g., FIG. 7) and animations of single days and the entire year. This enables the user to both analyze and visualize the distribution of daylight throughout the year.

Figure 7 – LICASO Daylight Autonomy.

4.  Results

All tests were performed on a Windows 10 desktop computer with an Intel Core i7-4770K quad-core CPU (3.5 GHz, overclocked to 4.1 GHz) and 32 GB of random-access memory.

As previously noted, both the accuracy and execution time of DAYSIM are strongly dependent on the user-specified parameter values, as is evident from FIG. 3. Consequently, eight separate simulations were performed with different parameter settings for -ab, -ad, -ar, and -as, with each simulation being compared with the LICASO results. (Default values were used for all other DAYSIM parameters.)

Two metrics were chosen for comparison purposes: average illuminance as measured by the virtual photometers, and spatial daylight autonomy for 300 lux and 50 percent minimum time (designated as sDA300/50% by IES LM-83-12).6 The benchmark results are presented in Appendix A.

The DAYSIM versus LICASO average illuminance differences are plotted in Figure 8. Assuming that the DAYSIM simulations represent increasing accuracy with each simulation, it is evident that LICASO underestimates the average illuminance of the south room by 5 percent and the west room by 3 percent, and overestimates the average illuminance of the north room by 2 percent and the east room by 1 percent. Considering that DAYSIM and LICASO use completely different computational models, these differences are remarkably small.

Figure 8 – DAYSIM versus LICASO average illuminance differences

The DAYSIM versus LICASO sDA300/50% differences are plotted in Figure 9. Again assuming that the DAYSIM simulations represent increasing accuracy with each simulation, it is evident that LICASO underestimates the sDA of the west room by 8 percent, the north room by 4 percent, and the east room by 8 percent.

Figure 9 – DAYSIM versus LICASO sDA300/50% differences

The DAYSIM sDA value for the east room appears to be a calculation anomaly, possibly due to a remaining “splotch” of light. Ray tracing in Radiance is a stochastic (i.e., random) process, and so this anomaly may not occur if the benchmark is executed on a different machine.

The execution times for the different simulations are summarized in Table 13.

Table 13 – DAYSIM / LICASO execution times

Simulation   DAYSIM Execution Time (minutes)   LICASO Comparison (LICASO: 45 seconds)
1                  28                           37 times faster
2                  69                           92 times faster
3                 103                          137 times faster
4                 143                          190 times faster
5                 176                          234 times faster
6                 182                          242 times faster
7                 221                          294 times faster
8                 261                          348 times faster

The differences in execution time between LICASO and DAYSIM are perhaps surprising, but they follow from the differences between the ray-tracing and radiosity calculation models. Simply put, radiosity methods are better able to take advantage of scene redundancy between hourly calculations.

5.  Conclusions

For both the average illuminance and spatial daylight autonomy metrics used in this benchmark comparison, it must be implicitly assumed that DAYSIM generates correct values. DAYSIM is described as “validated, Radiance-based daylighting analysis software,” but the simulations show that the two metrics converge to constant values only for Simulations 6 through 8. It is true that Radiance has been validated by a number of studies, but the accuracy of its photometric predictions is highly dependent on the parameters chosen for rtrace and, by extension, rtrace_dc.

Simulation 1 generates average illuminance results that differ by up to 6 percent from the converged values of Simulations 6 through 8. Similarly, the sDA results differ by up to 13 percent. Given this, the differences in results between DAYSIM and LICASO for Simulations 6 through 8 are arguably acceptable. (As an aside, differences of ±10 percent between predicted and measured illuminances are considered quite acceptable in electric lighting calculations.)

Regarding the difference in calculation times – LICASO is hundreds of times faster than DAYSIM – this must be put into perspective. For the past three decades, Radiance has been the gold standard for electric lighting and daylighting research, and DAYSIM has built upon this foundation by offering lighting researchers open-source software for dynamic daylight metrics, annual visual glare analysis, and electric lighting control. The innumerable user-specified parameters of rtrace and other Radiance tools (including DAYSIM) may make them difficult to master, but they are essential for lighting research.

Lighting Analysts’ LICASO, by comparison, is a commercial product that is powered by proprietary software licensed from SunTracker Technologies, and which relies on patented and patent-pending algorithms. It is intended for use as a climate-based daylighting simulation and analysis tool for professional lighting designers and architects. It further does not support glare analysis, electric lighting control, or bidirectional scattering-distribution functions (BSDFs), although these features are currently under development.

In summary, then, this benchmark analysis has shown that DAYSIM and LICASO generate comparable results in terms of dynamic daylight metrics such as spatial daylight autonomy. LICASO is clearly faster, but this comes at a cost for daylighting research, as there are fewer parameters to experiment with. Which software to choose depends, as always, on the user’s requirements.

References

  1. Ashdown, I. 1994. Radiosity: A Programmer’s Perspective. New York, NY: John Wiley & Sons.
  2. Bourgeois, D., C. F. Reinhart, and G. Ward. 2008. “Standard Daylight Coefficient Model for Dynamic Daylighting Simulations,” Building Research & Information 36(1):68-82.
  3. Delos Living. 2017. The WELL Building Standard v1 with January 2017 addenda. New York, NY: Delos Living LLC.
  4. 2014. EFA Daylight Design Guide, Lighting Strategy, Version 2. London, UK: The National Archives.
  5. 2010. The Lighting Handbook, Tenth Edition. New York, NY: Illuminating Engineering Society.
  6. 2013. IES RP-5-13, Recommended Practice for Daylighting Buildings. New York, NY: Illuminating Engineering Society.
  7. 2012. LM-83-12, IES Spatial Daylight Autonomy (sDA) and Annual Sunlight Exposure (ASE). New York, NY: Illuminating Engineering Society.
  8. Mardaljevic, J. 1998. “Daylight Simulation,” Chapter 6, in Rendering with Radiance, G. W. Larson and R. Shakespeare, Eds. San Francisco, CA: Morgan Kaufmann Publishers.
  9. Nabil, A., and J. Mardaljevic. 2006. “Useful Daylight Illuminances: A Replacement for Daylight Factors,” Energy and Buildings 38(7):905-913.
  10. 2013. LEED v4 BD+C: Schools – Daylight. Washington, DC: U.S. Green Building Council (www.usgbc.org).

Appendix A – Benchmark Results

Table A1 – DAYSIM / LICASO Simulation 1

Execution time: DAYSIM 28 minutes; LICASO 45 seconds
Parameters: -ab 5, -ad 1000, -ar 300, -as 20

Room     Average Illuminance (lx)            sDA300/50%
         DAYSIM    LICASO    Diff.           DAYSIM    LICASO    Diff.
West      987.6     983      -0.5%            70.9%     71.0%    +0.1%
South    2060.3    1991      -3.3%           100.0%    100.0%     0.0%
North     612.2     648      +5.8%            62.1%     65.0%    +2.9%
East     1219.1    1253      +2.7%            82.4%     87.0%    +4.5%

Table A2 – DAYSIM / LICASO Simulation 2

Execution time: DAYSIM 69 minutes; LICASO 45 seconds
Parameters: -ab 6, -ad 2000, -ar 300, -as 200

Room     Average Illuminance (lx)            sDA300/50%
         DAYSIM    LICASO    Diff.           DAYSIM    LICASO    Diff.
West     1002.0     983      -1.8%            73.1%     71.0%    -2.8%
South    2081.6    1991      -4.3%           100.0%    100.0%     0.0%
North     625.2     648      +3.6%            64.8%     65.0%    +0.3%
East     1236.0    1253      +1.3%            93.4%     87.0%    -6.8%

Table A3 – DAYSIM / LICASO Simulation 3

Execution time: DAYSIM 103 minutes; LICASO 45 seconds
Parameters: -ab 6, -ad 3000, -ar 300, -as 300

Room     Average Illuminance (lx)            sDA300/50%
         DAYSIM    LICASO    Diff.           DAYSIM    LICASO    Diff.
West     1009.6     983      -2.6%            75.8%     71.0%    -6.3%
South    2085.7    1991      -4.5%           100.0%    100.0%     0.0%
North     632.6     648      +2.4%            67.0%     65.0%    -2.9%
East     1240.3    1253      +1.0%            92.3%     87.0%    -5.7%

Table A4 – DAYSIM / LICASO Simulation 4

Execution time: DAYSIM 143 minutes; LICASO 45 seconds
Parameters: -ab 6, -ad 4000, -ar 300, -as 400

Room     Average Illuminance (lx)            sDA300/50%
         DAYSIM    LICASO    Diff.           DAYSIM    LICASO    Diff.
West     1011.4     983      -2.8%            75.3%     71.0%    -5.7%
South    2092.0    1991      -4.8%           100.0%    100.0%     0.0%
North     631.3     648      +2.6%            65.9%     65.0%    -1.3%
East     1243.2    1253      +0.7%            95.1%     87.0%    -8.5%

Table A5 – DAYSIM / LICASO Simulation 5

Execution time: DAYSIM 176 minutes; LICASO 45 seconds
Parameters: -ab 6, -ad 5000, -ar 300, -as 500

Room     Average Illuminance (lx)            sDA300/50%
         DAYSIM    LICASO    Diff.           DAYSIM    LICASO    Diff.
West     1015.8     983      -3.2%            76.4%     71.0%    -7.0%
South    2093.9    1991      -4.9%           100.0%    100.0%     0.0%
North     636.4     648      +1.8%            65.9%     65.0%    -1.3%
East     1243.0    1253      +0.8%            95.1%     87.0%    -8.5%

Table A6 – DAYSIM / LICASO Simulation 6

Execution time: DAYSIM 182 minutes; LICASO 45 seconds
Parameters: -ab 7, -ad 5000, -ar 300, -as 500

Room     Average Illuminance (lx)            sDA300/50%
         DAYSIM    LICASO    Diff.           DAYSIM    LICASO    Diff.
West     1013.5     983      -3.0%            76.4%     71.0%    -7.0%
South    2097.4    1991      -5.0%           100.0%    100.0%     0.0%
North     634.4     648      +2.1%            68.1%     65.0%    -4.5%
East     1245.8    1253      +0.5%            95.1%     87.0%    -8.5%

Table A7 – DAYSIM / LICASO Simulation 7

Execution time: DAYSIM 221 minutes; LICASO 45 seconds
Parameters: -ab 7, -ad 6000, -ar 300, -as 600

Room     Average Illuminance (lx)            sDA300/50%
         DAYSIM    LICASO    Diff.           DAYSIM    LICASO    Diff.
West     1025.4     983      -4.1%            76.9%     71.0%    -7.6%
South    2096.2    1991      -5.0%           100.0%    100.0%     0.0%
North     633.3     648      +2.3%            68.1%     65.0%    -4.5%
East     1249.3    1253      +0.2%           100.0%     87.0%   -13.0%

NOTE: The DAYSIM sDA value for the East room appears to be an anomaly.

Table A8 – DAYSIM / LICASO Simulation 8

Execution time: DAYSIM 261 minutes; LICASO 45 seconds
Parameters: -ab 7, -ad 7000, -ar 300, -as 700

Room     Average Illuminance (lx)            sDA300/50%
         DAYSIM    LICASO    Diff.           DAYSIM    LICASO    Diff.
West     1016.1     983      -3.2%            77.5%     71.0%    -8.3%
South    2095.9    1991      -5.0%           100.0%    100.0%     0.0%
North     634.2     648      +2.1%            68.1%     65.0%    -4.5%
East     1242.6    1253      +0.8%            94.5%     87.0%    -7.9%

CIE 171:2006 – Errata

Getting It Right

Ian Ashdown, P. Eng., FIES

Senior Scientist, Lighting Analysts Inc.

[Please send all comments to allthingslighting@gmail.com]

UPDATE 16/07/08 – Added Optis SPEOS to list of validated (not “certified”) software products and Test Case 5.11 analysis.

In 2006, the Commission Internationale de l’Éclairage (CIE) published CIE 171:2006, Test Cases to Assess the Accuracy of Lighting Computer Programs. To quote from the summary:

The objective of this report is to help lighting program users and developers assess the accuracy of lighting computer programs and to identify their weaknesses. A validation approach is therefore presented based on the concept of separately testing the different aspects of light propagation. To apply this approach, a suite of test cases has been designed where each test case highlights a given aspect of the lighting simulation domain and is associated with the related reference data.

Two types of reference data are used: data based on analytical calculation and data based on experimental measurements. The first is associated with theoretical scenarios that avoid uncertainties in the reference values. The second type is obtained through experimental measurements, where the scenario and the protocol are defined in a manner that minimizes the uncertainties associated with the measurements.

As one of the 24 members of CIE Technical Committee 3-33 that wrote the report (mostly as a technical editor and reviewer), and also as a member of the IES Computer Committee that spent a decade attempting to write a similar document, I can attest that it was a monumental task. It is therefore understandable that there were at least a few errors in the final report.

The first and most important of these errors became apparent a year later when one of the first validations of a commercial lighting design and analysis program was conducted (Dau 2007). Further errors became evident during the preparation of a graduate thesis (Osborne 2012), and more during a recent validation study (Dau 2016).

Understanding and documenting these errors is important. To date, there have been at least fifteen lighting design and analysis programs that have been validated against some or all of the CIE 171:2006 test cases, including:

Manufacturer              Program                 Reference(s)
Autodesk                  3ds Max Design          Osborne 2012
                          APOLUX / LightTools     Carvalho 2009; Cunha 2011; Moraes 2013; Pereira 2008
DIAL GmbH                 DIALux / DIAL Evo       Mangkuto 2016
EDSL                      Tas Daylight            EDSL 2015
LBNL                      Radiance                Donn et al. 2007; Geisler-Moroder and Dür 2008; Osborne 2012
Lighting Analysts         AGi32 / ElumTools       Dau 2007
Lightscape Technologies   Lightscape              Maamari 2006
Mental Images             mental ray              Labayrade and Fontoynont 2009
nVidia                    iRay                    Dau 2016
Optis                     SPEOS                   Labayrade and Sorèze 2014
Relux                     Relux                   Maamari 2006
Velux                     Daylight Visualizer     Labayrade et al. 2009; Labayrade et al. 2010
NOTE: Some companies have stated that their products have been “certified” by the Ecole Nationale des Travaux Publics de l’Etat (ENTPE). The CIE Central Bureau has confirmed (Paul 2013) that it has requested these companies not to use the phrases “certified by” or “certified against” CIE 171:2006. (The correct terminology is “validated.”)

Unfortunately, the CIE has yet to publish errata for CIE 171:2006, and there are currently no plans to do so.

As a service to the lighting industry, then, the following is a complete list of known errors in CIE 171:2006. Hopefully, this information will ease the pain and suffering of anyone undertaking the work of validating a lighting design and analysis program against this document.

1. Test Case 5.7

The objective of Test Case 5.7, “Diffuse reflections with internal obstructions,” is to “verify the capability of a program to simulate the influence of an obstruction to diffuse illumination.”

The derivation of Table 19 is not explained in CIE 171, but it was presumably determined using form factor analysis. The following independent analysis indicates that the values presented in Table 19 are incorrect.

1.1  Analytical Reference

As noted in Section 5.7.3, “To enable comparison between the simulation results and the analytical reference independently from the illuminance value over S2 or from its surface reflectance, the reference values are presented under the form of E / (Ev · ρ) (see Table 19).” This is equal to the configuration factor between the measurement point and the unobstructed portion of S2.

1.2  Table 19 Analysis

To validate the values presented in Table 19, it is necessary to calculate the configuration factors between the measurement point and the unobstructed portion of S2. For the horizontal surface S1‑hz measurements, the geometric relationships are:

CIE 171 Errata - FIG. 1

and the configuration factor C is given by:

CIE 171 Errata - EQN. 1.1                                           (1.1)

We can then use form factor algebra to determine:

CIE 171 Errata - FIG.2

and the configuration factor C is given by:

CIE 171 Errata - EQN. 1.2                                                     (1.2)

For the vertical surface S1‑v measurements, the geometric relationships are:

CIE 171 Errata - FIG.3

where:

CIE 171 Errata - EQN. 1.3                              (1.3)

We can again use form factor algebra to determine:

CIE 171 Errata - FIG.4

where:

CIE 171 Errata - EQN. 1.4                                                         (1.4)

for measurement points A through D, and:

CIE 171 Errata - FIG.5

where:

CIE 171 Errata - EQN. 1.5                                                   (1.5)

for measurement points E and F.

Table 19 then becomes:

CIE 171 Errata - Table 19

1.3  Conclusion

Table 19 of CIE 171:2006 is incorrect, likely because incorrect geometry was used for the calculations.

1.4  Worksheet

Horizontal Points (G – K)

b = 2.0

CIE 171 Errata - Worksheet 1

Vertical Points (A – D)

b = 2.0

c = 4.0

Y = 0.5

sx = sqrt(1 + X²)

sy = sqrt(1 + Y²) = 1.1180

CIE 171 Errata - Worksheet 2

Vertical Points (E – F)

b = 2.0

c = 4.0

Y = 0.5

sx = sqrt(1 + X²)

sy = sqrt(1 + Y²) = 1.1180

CIE 171 Errata - Worksheet 3
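The sx and sy terms above correspond to the standard configuration factor from a differential element to a parallel rectangle with one corner directly opposite the element; a sketch of that closed form (the perpendicular-surface case used for points A through F follows the same pattern with its own formula):

```python
import math

def config_factor_parallel(X, Y):
    """Configuration factor from a differential element to a parallel
    rectangle with one corner directly opposite the element, where
    X = a/c and Y = b/c (a, b = rectangle sides, c = normal distance)."""
    sx = math.sqrt(1.0 + X * X)
    sy = math.sqrt(1.0 + Y * Y)
    return (X / sx * math.atan(Y / sx)
            + Y / sy * math.atan(X / sy)) / (2.0 * math.pi)
```

Form factor algebra then yields off-corner rectangles by adding and subtracting corner-aligned factors, which is how the obstructed portion of S2 is handled.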

2.  Test Case 5.8

The objective of Test Case 5.8, “Internal reflected component calculation for diffuse surfaces,” is to “assess the accuracy of the diffuse inter-reflections inside a room.”

The approach consists of analytically calculating the indirect illuminance of a closed sphere by an isotropic point light source and using this as the “approximate average indirect illuminance” of a square room.

2.1  General Approach Commentary

To quote from CIE 171:2006:

The test case geometry is a square room of dimensions 4 m x 4 m x 4 m (ST = 96 m²), with all surfaces being uniform diffusers and spectrally neutral. An isotropic point light source is positioned at the centre of the room with an output flux (Φ) of 10000 lm.

The reflectance ρ is the same for all interior surfaces and varies from 0% to 95%.

It is understandable that CIE 171:2006 specifies a square room rather than a tessellated sphere, as some older programs (such as Lighting Technologies’ Lumen Micro) are incapable of supporting arbitrarily oriented surface elements. However, this presents a problem in that the interreflections between surface elements at the room corners result in significantly lower illuminances for these elements than for those elements in the middle of the room surfaces. This problem is exacerbated by low surface reflectances. (See Figure 1 for an example.)

This problem is compounded by the choice of surface discretization. A coarse mesh will tend to smooth the illuminance distribution, but it will also mask errors. Without specifying a mesh resolution or how to average the results, it is difficult to compare the results from different lighting design programs that use the radiosity method. It is even more difficult to compare results from ray-tracing programs, as the results depend on the number of stochastically traced rays.

CIE 171 Errata - FIG.6

Figure 1.  Example room with 10% surface reflectance. Illuminance values range from 48 lx in the room corners to 209 lx in the center of the room surfaces. (Table 1 predicts 115 lx.)

2.2  Analytical Solution Commentary

To quote again from CIE 171:2006:

Analytically, in the case of a closed sphere with diffuse internal surfaces, the indirect flux incident upon an internal point of the sphere is given by the equation:

CIE 171 Errata - CIE EQN. 14                                                                   (14)

where:

Φ = direct luminous flux entering the sphere.

The indirect illuminance at any internal point of the sphere is given by the equation:

CIE 171 Errata - CIE EQN. 15                                                                                                   (15)

where:

E   = indirect illuminance (lx);

ST  = sphere internal surface area (m²);

ρ   = sphere internal surface reflectance;

Φ   = direct luminous flux entering the sphere (lm).

The problem with this approach is that most lighting design programs do not separately report direct and indirect illuminance. It is therefore necessary to relate total illuminance to its indirect component for the special case of an integrating sphere.

The luminance L at any internal point of the sphere due to its combined direct and indirect illuminance E is:

L = ρ E / π                                                                             (2.1)

Given that the sphere surface is an ideal diffuse reflector, the luminous exitance M at any point is:

M = π L = ρ E                                                                           (2.2)

and so the total illuminance E at any point is:

E = M / ρ                                                                               (2.3)

The direct illuminance E_d at any point is:

E_d = Φ / S_T                                                                           (2.4)

and so its indirect illuminance E_ind is:

E_ind = E − E_d = E − Φ / S_T                                                           (2.5)

Substituting Equation 15 (that is, E_ind = (Φ / S_T) ρ / (1 − ρ)) into Equation 2.5 and solving for E gives:

E = E_ind / ρ                                                                           (2.6)

Dividing each entry of Table 20 by ρ gives:

CIE 171 Errata - Table 20

Table 1. Illuminance variation with reflectance.
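The relation in Equation 2.6 is easy to verify numerically. The sketch below (a minimal check, with arbitrary example values for the flux, sphere radius, and reflectance) computes the direct, indirect, and total illuminance of an ideal integrating sphere and confirms that the total is the indirect component divided by ρ:

```python
import math

def sphere_illuminances(flux_lm, radius_m, rho):
    """Direct, indirect, and total illuminance of an ideal integrating sphere."""
    s_t = 4.0 * math.pi * radius_m ** 2         # internal surface area S_T (m^2)
    e_direct = flux_lm / s_t                    # Equation 2.4
    e_indirect = e_direct * rho / (1.0 - rho)   # Equation 15
    return e_direct, e_indirect, e_direct + e_indirect

# Arbitrary example: 1000 lm entering a sphere of 1 m radius, 80% reflectance.
e_d, e_ind, e_total = sphere_illuminances(1000.0, 1.0, 0.8)

# Equation 2.6: total illuminance = indirect illuminance / rho.
print(abs(e_total - e_ind / 0.8) < 1e-9)  # True
```

Because a program that reports only total illuminance can be checked against E_ind / ρ, this conversion is what makes the integrating-sphere reference usable for validation.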

2.3  Conclusions

In view of the above, it is recommended that:

  1. The test case geometry be amended to consist of a sphere rather than a square room; and
  2. The test case analytical reference be amended to specify total illuminance rather than indirect illuminance.

3.  Test Case 5.9

The objective of Test Case 5.9, “Sky component under a roof unglazed opening and the CIE general sky types,” is to “test the capability of a lighting program to calculate the sky component under different conditions, in particular those standardized by the CIE general sky.”

3.1  Correction

With respect to Section 5.9.3, the entries for CIE General Sky Type 3 in Table B1 are transposed; the correct values are:

CIE 171 Errata - Table B1

4.  Test Case 5.10

The objective of Test Case 5.10, “Sky component under a roof glazed opening,” is to “verify the capability of a lighting program to simulate the influence of glass with a given directional transmittance under different type of CIE general skies.”

4.1  Analysis

The glazing consists of 6mm clear glass, whose average transmittance is not specified. However, an equation for glass transmittance is given by Mitalas and Arseneault (1968):

CIE 171 Errata - EQN. 4.1   (4.1)

where θ is the incidence angle. Setting θ to zero gives a calculated normal-incidence transmittance T of 0.878.
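Equation 4.1 expresses T as a fifth-order polynomial in cos θ. The sketch below evaluates a polynomial of that form; the coefficients are widely published clear-glass values for double-strength sheet glass, included only to illustrate the shape of the curve, not as the exact 6 mm coefficients behind the 0.878 figure (with these illustrative values, normal incidence yields approximately 0.870):

```python
import math

# Illustrative coefficients for a fifth-order clear-glass transmittance
# polynomial in cos(theta). These are commonly published double-strength
# sheet glass values, NOT necessarily the 6 mm coefficients cited above.
COEFFS = [-0.00885, 2.71235, -0.62062, -7.07329, 9.75995, -3.89922]

def transmittance(theta_deg):
    """Evaluate T(theta) = sum_j a_j * cos(theta)^j (the form of Equation 4.1)."""
    c = math.cos(math.radians(theta_deg))
    return sum(a * c ** j for j, a in enumerate(COEFFS))

print(round(transmittance(0.0), 3))  # 0.87 with these illustrative coefficients
```

At grazing angles the polynomial falls toward zero, reflecting the sharp drop in glass transmittance at high incidence angles.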

5.  Test Case 5.11

The objective of Test Case 5.11, “Sky component and external reflected component for a façade unglazed opening,” is to “verify the capability of a program to correctly calculate the contribution of the external ground and the sky luminance distribution to the internal illuminance of a room with a façade opening.”

Unfortunately, this test case is flawed in that “the external ground illuminance is assumed to be uniform.” This assumption fails to take into consideration shadowing of the ground by the building. Quoting further, “the direct sun illuminance is not taken into consideration,” so the shadowing will depend on the CIE sky type. (Whether this assumption significantly affects the published results in Tables B.9, B.10, and B.11 is unknown.)

6.  Test Case 5.12

CIE 171:2006 does not state the transmittance of 6mm clear glass. However, following Test Case 5.10, it should be assumed to be 0.878.

7.  Test Case 5.13

The objective of Test Case 5.13, “SC+ERC for an unglazed façade opening with a continuous external horizontal mask” is to “verify the capability of a lighting program to simulate the influence of an external horizontal mask on the internal direct illuminance.”

Unfortunately, this test case is fundamentally flawed for three reasons:

  1. It assumes an external horizontal mask of uniform luminance L_ob, which is derived in Section 5.13.1.1 from the external horizontal ground illuminance. It does not, however, consider the shadowing influence of the black room beneath it.
  2. It does not specify the mask surface reflectance ρ_ob.
  3. It requires the mask to have uniform luminance. However, the shadowing influence of the black room will result in a nonuniform luminance distribution.

Given these flaws, Tables B.21, B.22, and B.23 should not be used.

This test case should be restated such that the mask surface reflectance ρ_ob is zero, in which case the tables will have to be recalculated.

8.  Test Case 5.14

The objective of Test Case 5.14, “SC+ERC for an unglazed façade opening with a continuous external vertical mask,” is to “verify the capability of a lighting program to simulate the influence of an external vertical mask on the internal direct illuminance.”

Unfortunately, this test case is fundamentally flawed for three reasons:

  1. It assumes an external vertical mask of uniform luminance L_ob, which is derived in Section 5.14.1.1 from the external horizontal ground illuminance. It does not, however, consider the shadowing influence of the black room in front of it.
  2. It does not specify the mask surface reflectance ρ_ob.
  3. It requires the mask to have uniform luminance. However, the shadowing influence of the black room will result in a nonuniform luminance distribution.

Given these flaws, Tables B.24, B.25, and B.26 should not be used.

This test case should be restated such that the mask surface reflectance ρ_ob is zero, in which case the tables will have to be recalculated.

Acknowledgements

Thanks to Dawn De Grazio (Lighting Analysts Inc.) and Wilson Dau (Dau Design and Consulting Inc.) for assistance in preparing this article.

References

Carvalho, C. R. 2009. Avaliação do programa APOLUX segundo protocolos do relatório CIE 171:2006 referentes à iluminação natural. Dissertation, Federal University of Santa Catarina, Florianopolis (in Portuguese).

Cunha, A. V. L. 2011. Avaliação do programa APOLUX segundo os protocolos de modelos de céu do relatório técnico CIE 171:2006. Dissertation, Federal University of Santa Catarina, Florianopolis (in Portuguese).

Dau, W. 2007. Validation of AGi32 Against CIE 171:2006. Dau Design and Consulting Inc.

Dau, W. 2016. Personal communication. Dau Design and Consulting Inc.

Donn, M., D. Xu, D. Harrison, and F. Maamari. 2007. “Using Simulation Software Calibration Tests as a Consumer Guide – A Feasibility Study Using Lighting Simulation Software,” Proc. Building Simulation 2007, pp. 1999-2006.

EDSL. 2015. Validation of TAS Daylight against CIE 171:2006. Environmental Design Solutions Limited.

Geisler-Moroder, D., and A. Dür. 2008. “Validation of Radiance against CIE 171:2006 and Improved Adaptive Subdivision of Circular Light Sources,” Proc. Seventh International Radiance Workshop.

Labayrade, R., and M. Fontoynont. 2009. “Use of CIE 171:2006 Test Cases to Assess the Scope of Lighting Simulation Programs,” CIE Light and Lighting.

Labayrade, R., H. W. Jensen, and C. Jensen. 2009. “Validation of Velux Daylight Visualizer 2 against CIE 171:2006 Test Cases,” Proc. Building Simulation 2009, pp. 1506-1513.

Labayrade, R., H. W. Jensen, and C. Jensen. 2010. “An Iterative Workflow to Assess the Physical Accuracy of Lighting Computer Programs,” Light and Engineering 18(2):66-70.

Labayrade, R., and T. Sorèze. 2014. Assessment of SPEOS Against CIE 171:2006 Test Cases. Ecole Nationale des Travaux Publics de l’Etat (ENTPE).

Langer, M. S. 1999. “When Shadows Become Interreflections,” International Journal of Computer Vision 34(2/3):193-204.

Maamari, F., M. Fontoynont, and N. Adra. 2006. “Application of the CIE Test Cases to Assess the Accuracy of Lighting Computer Programs,” Energy and Buildings 38(7):869-877.

Mangkuto, R. A. 2016. “Validation of DIALux 4.12 and DIALux evo 4.1 against the Analytical Test Cases of CIE 171:2006,” Leukos 12(3):139-150.

Mitalas, G. P., and J. G. Arseneault. 1968. Division of Building Research Computer Program No. 28: Fortran IV Program to Calculate Absorption and Transmission of Thermal Radiation by Single and Double Glazed Windows. Ottawa, ON: National Research Council of Canada.

Moraes, L. N., A. S. da Silva, and A. Claro. 2013. “Evaluation of the Software LightTool and APOLUX according to Protocols of Technical Report CIE 171:2006,” Proc. Building Simulation 2013, pp. 1079-1086.

Osborne, J. 2012. Building a Comprehensive Dataset for the Validation of Daylight Simulation Software, using Complex “Real Architecture.” MSc. Thesis, Victoria University of Wellington.

Paul, M. 2013. Personal communication.

Pereira, R. C. 2008. Avaliação de ferramentas de simulação de iluminação natural por meio de mapeamento digital de luminâncias da abóbada celeste e entorno. Thesis, Federal University of Santa Catarina, Florianopolis (in Portuguese).

[1] In general, the luminance distribution of a non-convex object is determined not only by external illumination but also by interreflections between its surfaces. This issue has been extensively studied in the field of computer vision and image understanding. See, for example, Langer (1999).