Calibrate Everything! With the LG ACB8300

The ACB8300 is a low-price calibrator (especially compared to the other options available) sold to calibrate LG monitors. It is tied to the LG software, and so can't be used for anything else.

I have a 34UM95 monitor, and an ACB8300 to calibrate it. I want to be able to calibrate other things. I have no idea what I'm doing. This should be interesting!



Github

Just a quick update - I've not actually done anything, but the project's code is now on github.

It's not exactly production quality, but it's not particularly complex either.

Enjoy! If you do anything with it, let me know!


Measuring Success

It's been a fair while since my last post. I've been busy writing the part of the application that actually turns everything done so far into something useful. I'm not going to go into much detail on that, as it was all pretty boring and straightforward - no excitement or interesting developments whatsoever.

It's a very basic Windows Forms application (eh, there was no need to get fancy!). On start-up, it looks like this:

That big square in the middle changes colour as it runs its tests. Simply sit the calibrator on top of it on the screen, and we're ready to get going!

Brightness

The first thing that needs to be set is the brightness. Under the Monitor menu item, there's an option to continually report the brightness of the screen, measured in cd/m² (candela per square metre).

For this test run, I've reset my screen to its default settings, which means that it's currently putting out 180 cd/m². This is the point at which you'd normally mess with the screen options to change the brightness to the desired level. I'm just going to leave it at 180.

Measuring

Let's run the calibration then, and see how accurate the colours are when using the defaults. The application makes sure that the calibrator is in the right position, rattles through a range of greys, and records the measurements for each.

The colours measured run from pure black at RGB(0,0,0) through to white at RGB(255,255,255). A reading is taken at every 10% (or as close as can be approximated), so RGB(25,25,25), RGB(51,51,51), etc. As each of these points is an equal level of red, green, and blue, the test doesn't need to do each channel individually - it's doing them all at once. If all the channels in the grey line up properly, then everything is fine!
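For illustration, here's roughly how that list of test points could be built (a sketch of my own; the real application may do it differently). The integer maths reproduces the RGB(25,25,25) and RGB(51,51,51) points mentioned above:

```csharp
using System;
using System.Linq;

class GreyRamp
{
    // Grey test points at every 10% from black to white.
    // Integer division gives 0, 25, 51, 76, ..., 229, 255.
    public static int[] Levels()
    {
        return Enumerable.Range(0, 11).Select(i => 255 * i / 10).ToArray();
    }

    static void Main()
    {
        foreach (int level in Levels())
            Console.WriteLine("RGB({0},{0},{0})", level);
    }
}
```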

Results

So, the test has completed - how did we do?

The results display shows the difference (in percent) between the measured brightness/red/green/blue values, and what they should be. So, it's not too bad overall. The colours drift a bit and it's a bit too dark in general down at the dimmer end of the spectrum. But, it's pretty decent for default settings.

The measured brightness level helps in setting the non-colour-specific adjustments, brightness, contrast, and gamma. If the measured brightness values are close to their target, then things on screen will be as bright or dark as they should be, regardless of how wrong their colour is.

Colours are much simpler. If the number is negative, the colour is not red/green/blue enough, and if it's positive then it's too red/green/blue. That's it. Though, of course, changing the colour levels can affect the brightness. There's a lot of back and forth and re-measuring to be done in setting up a screen properly.

Whilst working on the grid, I did some research on how accurate a colour needs to be so that it is indistinguishable from the correct colour. I found a somewhat reliable looking web page (which I unfortunately cannot find again - I'll keep looking) that said the smallest perceptible difference between two colours is roughly 3%. So, anything that's more than 3% out is coloured in red to bring attention to it. All these numbers need to be as close to 0.00% as possible!
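In code, the grid's logic amounts to something like this (the names here are my own invention, not from the real application):

```csharp
using System;

class ResultsGrid
{
    // Percentage difference between a measured channel value and its target.
    // Negative means too dark / not enough of that colour; positive means too much.
    public static double PercentDifference(double measured, double target)
    {
        return (measured - target) / target * 100.0;
    }

    // Anything more than ~3% out gets highlighted in red.
    public static bool NeedsAttention(double measured, double target)
    {
        return Math.Abs(PercentDifference(measured, target)) > 3.0;
    }

    static void Main()
    {
        Console.WriteLine(PercentDifference(104.0, 100.0)); // 4% out
        Console.WriteLine(NeedsAttention(104.0, 100.0));    // flagged
        Console.WriteLine(NeedsAttention(102.0, 100.0));    // within tolerance
    }
}
```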

Calibrated

Okay, we've got the measurements for the default monitor settings. I'm now going to throw the screen through the LG software to give it a proper calibration, and then re-measure to see what happens. If I've done everything right, and the LG software is as good as it seems, then the results from this should be solid.

That's white across the board, and nothing is more than 1.71% wrong. The LG software does a good job indeed!

I also think that this counts as proof that everything is working as it should.

Now, time to dig up my old laptop so I can go calibrate my TV without running cables across the living room...

Comments

Great project! Today I received my LG ACB8300 and started hacking around when I discovered your work. My goal is actually very similar to yours, however I would love to see the device integrated into software like ArgyllCMS. As I do not have an LG monitor to test with: is your test application available anywhere? Thanks

Snaked replied:

Thanks - glad you found it useful!

The code is pretty simple. I'll clean it up a bit and stick it on github somewhere. I'll post a link when it's up.

You'll need the DLL file from the TrueColorFinder software (it's under 'LG Monitor Software' here http://www.lg.com/ae/support-product/lg-34UM95).

Snaked replied:

Didn't have much time this weekend, but it's all uploaded now! The URL is https://github.com/qcc23/acb8300-cs


Ollie commented on Measuring Success:

Hi Snaked, Great Post (and project), have learnt a lot about colour spaces! The calibrator that I have does go through red, green, blue individually as well as white to black, which takes ages. Not sure if you can get better values for an individual colour without being affected by the brightness of the other colours, or if my calibrator is just wasting my time!

Snaked replied:

I'm not basing the 'greys only' thing on anything particularly scientific.

From the way a monitor works (lighting up different colour sub-pixels to produce a coloured pixel), I'd expect the red in RGB(100,0,0) to be equal to the red in RGB(100,100,100). The monitor calibration software uses greys only (with a brief trip through max red/green/blue to plot a graph it shows after calibration) which seems to back up the idea.

If you (or anyone else!) knows better, let me know!


Gamma

Brightness/Darkness

All tests so far have been done with RGB colours where the channels were either 0 or 255. I tried measuring an RGB(100, 100, 0) colour, and got a result of RGB(29.5, 31, 0).

The reason behind this is gamma. Some words from Wikipedia:

Human vision, under common illumination conditions (not pitch black nor blindingly bright), follows an approximate gamma or power function, with greater sensitivity to relative differences between darker tones than between lighter ones.

Gamma correction is used to extend the usable range of 8-bit colour. If 0 to 255 represented intensities in a linear manner, then a large part of the range would be dedicated to bright values which the human eye can't easily distinguish.
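To put rough numbers on that, here's a quick back-of-envelope illustration of my own: with a 2.2 gamma encoding, the darkest 5% of linear intensity gets around a quarter of the 256 available codes, where a straight linear encoding would give it only about 13 of them.

```csharp
using System;

class GammaCodes
{
    // With gamma-encoded 8-bit values, the fraction of the 256 codes that
    // covers linear intensities below a threshold is threshold^(1/gamma).
    public static int CodesBelow(double linearIntensity, double gamma)
    {
        return (int)Math.Round(255 * Math.Pow(linearIntensity, 1 / gamma));
    }

    static void Main()
    {
        // Codes covering the darkest 5% with gamma 2.2 encoding: ~65.
        Console.WriteLine(CodesBelow(0.05, 2.2));
        // Codes the same range would get with a linear encoding: ~13.
        Console.WriteLine((int)Math.Round(255 * 0.05));
    }
}
```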

This page provides an excellent explanation of the reasoning behind it all, and how it all works.

De-gamma

The monitor applies a gamma curve of 2.2 to the input signal. I know this because that's what's set in the LG software when I calibrated it. It also happens to be the gamma curve value that everyone seems to be standardising on for everything.

As the measured output has had a 2.2 gamma curve applied to it, I need to apply the inverse curve to turn the calibrator's reading from the screen into the value that was sent to the monitor by the PC. This is actually very simple:

static double FromGamma(double level, double gamma)
{
    return Math.Pow(level, 1 / gamma);
}

I've let the gamma be specified rather than coding in 2.2, as it can be advantageous to calibrate a screen to a higher gamma (for a dark room) or a lower gamma (for a light room).

To validate that the calculation is correct, I worked it out using the measured values. The maths uses the reading before it is multiplied out to a 0-255 range. So, using the reading of 30 for an output of 100:

Measured = 30/255 [0.118]
Corrected = Measured ^(1/2.2) [0.378] 
Actual = Corrected*255 [96.399]

So, I put 100 into the monitor, I get 96.4 out of the monitor. Close enough!
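The same working, as a runnable check using the FromGamma function from above:

```csharp
using System;

class GammaCheck
{
    // Inverse gamma: undo the monitor's gamma curve on a 0.0-1.0 reading.
    public static double FromGamma(double level, double gamma)
    {
        return Math.Pow(level, 1 / gamma);
    }

    static void Main()
    {
        double measured = 30.0 / 255.0;              // raw reading for an input of 100
        double corrected = FromGamma(measured, 2.2); // ~0.378
        double actual = corrected * 255.0;           // ~96.4
        Console.WriteLine("Actual: {0:F3}", actual);
    }
}
```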

Calibration

The expectation here is that the value read by the calibrator (post gamma correction) matches the value that was sent to the screen. If the value read from the calibrator is too low, then the screen is darker than it should be. If it's too high, then it's lighter than it should be.

On a screen being measured (eg. my TV) I should be able to bring these values close to their target by fiddling with contrast, brightness, and the colour-specific settings. Calibration!


Colouring In

Illumination

Before I can go from XYZ to RGB, I need to know what illuminant I'm targeting. This is related to colour temperature, the practical effect of which (for my simplistic mind) is that higher numbers are more blue and lower numbers are more red. A lot of things I've read and the LG software itself aim for 6500K by default, which is known as D65.

As an aside, the subject of colour and how it is represented is absolutely massive. There's so much involved that I'd never even thought about.

Conversion

The internet is a wonderful place. I found this page, which lists the matrices to convert RGB to XYZ and XYZ to RGB for various illuminant and RGB working spaces.

An RGB working space, in short, defines where '255' maps to on the overall chart of visible colour for each channel. I'll be targeting sRGB, which is in fairly standard use for monitors, TV, print, etc.

The D65 sRGB matrix is this:

 3.2404542 -1.5371385 -0.4985314
-0.9692660  1.8760108  0.0415560
 0.0556434 -0.2040259  1.0572252

Which, for a non-mathematical person, reads out something like:

r = x * 3.2404542 + y * -1.5371385 + z * -0.4985314
g = x * -0.9692660 + y * 1.8760108 + z * 0.0415560
b = x * 0.0556434 + y * -0.2040259 + z * 1.0572252

From the Wikipedia page, the D65 illuminant specifies the white point as X=95.047, Y=100.00, Z=108.883. These are the maximum values for X, Y and Z. The matrix takes an input of 0.0 to 1.0 for each of X, Y, and Z, and turns them into 0.0 to 1.0 for R, G, and B. Dividing the reading value by its maximum would convert the readings appropriately.

The sensor isn't constrained to this range though. I'm not sure what its range is. So, I'll start by measuring what the display outputs as fully white, and then use those values as the maximums instead of the ones listed on Wikipedia.

Result, measuring: White, Red, Green, Blue in that order.

That's not quite right. It's almost there, but the red reading is way too high to ignore.

I Am Dumb

A bit of thought would have told me that was doomed to failure. That's what I get for rushing off ahead without thinking things through. I'm trusting that what's currently being displayed as white is truly white. Even a small difference here throws the whole thing out of whack, which is exactly what is going on.

I need to convert the readings to something else first, to normalise the values I'm getting from the sensor. The xyY colour space looks hopeful, as it's described as a normalised representation, and the conversion from XYZ to xyY uses the XYZ values alone.

The Y value is unchanged between XYZ and xyY though, so this still needs to be normalised. The article goes on to say:

Since the human eye has three types of color sensors that respond to different ranges of wavelengths, a full plot of all visible colors is a three-dimensional figure. However, the concept of color can be divided into two parts: brightness and chromaticity. For example, the color white is a bright color, while the color grey is considered to be a less bright version of that same white. In other words, the chromaticity of white and grey are the same while their brightness differs.

The CIE XYZ color space was deliberately designed so that the Y parameter was a measure of the brightness or luminance of a color. The chromaticity of a color was then specified by the two derived parameters x and y, two of the three normalized values which are functions of all three tristimulus values X, Y, and Z.

So, using xyY, the colour is defined by x and y, and the brightness is defined by Y. This means that I can measure the maximum brightness the screen outputs, and use that maximum to convert future Y readings into a 0.0 to 1.0 range.

I can then convert xyY back to XYZ, and end up with XYZ readings measuring between 0.0 and 1.0 I can throw straight into that XYZ to RGB conversion matrix. Seems a bit of a long way around, but could be feasible.

Progress

The changes have been made. Let's fire it up and see what happens. Here's what I get for measuring White, Red, Green, and Blue after the changes:

This looks very good. Blue seems a little low, so I'll mark that for investigation. Red and Green are almost bang on target though!

It works a bit like this:

1. Capture the peak XYZ reading of full white to get the maximum brightness the display can output.

XYZ peak = new XYZ();
LG_Calibrator_GetXYZ(ref peak.X, ref peak.Y, ref peak.Z, 5);  // Average 5 readings

2. Capture an XYZ reading of a colour on the display.

XYZ xyz = new XYZ();    
LG_Calibrator_GetXYZ(ref xyz.X, ref xyz.Y, ref xyz.Z, 5);

3. Convert from XYZ to Yyx, resulting in a normalised representation of the colour.

Yyx yyx = new Yyx();

yyx.Y = (Math.Min(xyz.Y, peak.Y) / peak.Y); // normalise max brightness to 1.0
yyx.y = xyz.Y / (xyz.X + xyz.Y + xyz.Z);
yyx.x = xyz.X / (xyz.X + xyz.Y + xyz.Z);

return yyx;

4. Convert from Yyx back to XYZ, producing XYZ values in the range 0.0 to 1.0

XYZ xyz = new XYZ();

xyz.X = yyx.x * (yyx.Y / yyx.y);
xyz.Y = yyx.Y;
xyz.Z = (1 - yyx.x - yyx.y) * (yyx.Y / yyx.y);

return xyz;

5. Convert from XYZ to RGB, using the D65 sRGB matrix.

double r = xyz.X * 3.2404542 + xyz.Y * -1.5371385 + xyz.Z * -0.4985314;
double g = xyz.X * -0.9692660 + xyz.Y * 1.8760108 + xyz.Z * 0.0415560;
double b = xyz.X * 0.0556434 + xyz.Y * -0.2040259 + xyz.Z * 1.0572252;

RGB rgb = new RGB();
rgb.R = To255(r);
rgb.G = To255(g);
rgb.B = To255(b);

return rgb;

The To255 function referenced above is very straightforward.

static double To255(double value)
{
    // Negative values not allowed.
    // Values above 255 are allowed, because it's possible for something to be too red/green/blue.
    return Math.Max(0, value * 255);
}

6. Write a couple of lines to the console

Console.WriteLine("X: {0:F3}. Y: {1:F3}. Z: {2:F3}", xyz.X, xyz.Y, xyz.Z);
Console.WriteLine("R: {0:F3}. G: {1:F3}. B: {2:F3}", rgb.R, rgb.G, rgb.B);

Up Next

Well, that's some good stuff done today. The whole XYZ to xyY to XYZ to RGB thing seems a bit weird at first glance, but the maths holds out. The readings from the sensor need to be normalised before anything can be done with them, and going via xyY appears to be the cleanest way to achieve that.

The next step is to turn this random assortment of code into an application that can do the following:

  1. Display a known colour
  2. Get the reading of this colour from the sensor
  3. Calculate the difference between target colour and actual reading
  4. Repeat for each of many reds, blues, greens, and greys.
  5. Display the results in a useful manner

Once the application is capable of doing all that, then it is capable of being used to calibrate my TV!


RGB, XYZ, x86

Four Numbers

Okay, in the last post I finished up with finally getting some readings from the sensor, but having no idea what they meant. Finding relevant information on the internet is tremendously difficult when you don't know what you're looking for!

I ended up on the Wikipedia page for the CIE Colour Space, which describes a lot of different ways of representing colour. I figured I'd do some searching for each of these in relation to colour calibration, the strongest results being related to the XYZ colour space. No results for any of the various ways of saying "how do I turn four arbitrary values into XYZ", unfortunately.

Taking a slightly different tack, I figured I would try to find out about the actual sensor chip in the calibrator itself, and whether the sensor output in any way resembles what I'm reading from the USB traffic. Now, I don't know what the sensor is inside of the calibrator, but they're all going to work in a similar way right? Here's hoping.

I rapidly hit upon a list of colour sensor chips, with their datasheets available. Crack open the information on one of the I2C sensors, and you'll see:

  • It produces four outputs: Red, Green, Blue, and Clear.
  • The outputs are 16-bit numbers.

Unfortunately, there's nothing in the datasheet about converting those values to something usable. Making the assumption for now that the LG one works similarly, I return to Google to try to find out how it may be done.

One of the results describes an Arduino colour sensor, which actually uses one of the colour sensor chips I'd found previously. And, even better, it shows how to convert from the RGB values to XYZ values! Unfortunately, the conversion is chock full of magic numbers that have no source, are not in the datasheet, and do not appear on Google. I was somewhat hoping that they were known constants or something similar, common across all sensors.

A further result describes the process of calculating the formulae needed to do the RGB to XYZ conversion, using a second sensor to get a set of reference XYZ values. Unfortunately, I don't have a second device with which to get the reference values.

The LG calibration software knows what it's looking at, though, so these magic conversion numbers must be somewhere inside its code...

Dumping

Navigating through the TrueColorFinder directory, I spot something. It's named 'LG_ACB8300.dll'. If that looks familiar, it's because that's the model of calibrator I'm trying to dissect. Let's see what's going on inside this thing.

DUMPBIN is a tool that pulls information out of DLL files, amongst other things. Let's see what it's exposing to whatever may be using it:

>dumpbin.exe /exports "c:\Program Files (x86)\LG Electronics\TrueColorFinder\bin\LG_ACB8300.dll"

Microsoft (R) COFF/PE Dumper Version 10.00.40219.01
Copyright (C) Microsoft Corporation.  All rights reserved.


Dump of file c:\Program Files (x86)\LG Electronics\TrueColorFinder\bin\LG_ACB8300.dll

File Type: DLL

  Section contains the following exports for LG_ACB8300.dll

    00000000 characteristics
    526E50F3 time date stamp Mon Oct 28 11:56:35 2013
        0.00 version
           1 ordinal base
          15 number of functions
          15 number of names

    ordinal hint RVA      name

          1    0 00002200 LG_Calibrator_CloseWaitButton
          2    1 00002800 LG_Calibrator_DeviceCheck_Signage
          3    2 00001890 LG_Calibrator_DeviceClose
          4    3 00002310 LG_Calibrator_DeviceClose_Signage
          5    4 000028B0 LG_Calibrator_DeviceNumberRead
          6    5 00002830 LG_Calibrator_DeviceNumberWrite
          7    6 000017F0 LG_Calibrator_DeviceOpen
          8    7 000022A0 LG_Calibrator_DeviceOpen_Signage
          9    8 00001F10 LG_Calibrator_GetADC
         10    9 00002930 LG_Calibrator_GetAPIVersion
         11    A 00002960 LG_Calibrator_GetFWVersion
         12    B 00001FB0 LG_Calibrator_GetXYZ
         13    C 00003010 LG_Calibrator_GetXYZ_Signage
         14    D 00001EA0 LG_Calibrator_SetMonitorType
         15    E 000021A0 LG_Calibrator_StartWaitButton

  Summary

        4000 .data
        2000 .rdata
        2000 .reloc
        9000 .text

LG_Calibrator_GetXYZ. This is either going to take RGB numbers and turn them into XYZ values, or it's going to talk direct to the calibrator and return XYZ values. If it's the latter, then everything I've done up until this point could have been replaced by this DLL, and I'm going to kick myself.

Assembly

I don't know much at all about poking around inside compiled applications. I know they're full of assembly, which I have somewhere between little and no understanding of, and not much else. I found a list of debuggers, and from this selected ollydbg.

For clarity, I'll describe what I'm doing and the code I'm looking at along the way. If nothing else, it will serve as a reminder for the next time I'm doing something similar.

I open up the TrueColorFinder.exe in OllyDbg, and this is displayed. It doesn't actually run right away, I've got to hit the 'Play' icon for it to actually get going.

OllyDbg Window, Paused

I can see the LG_ACB8300.dll module in the Executable Modules window. Double click on it, and the left pane changes to that module. I can now browse through a list of names in the module and navigate to them. I can also set breakpoints from the same window, which I set on LG_Calibrator_GetXYZ; this is indicated by the red highlight.

Module Contents

But, before we actually start this up, let's see if I can work out what's actually going on.

Note: Everything from this point on could be complete bollocks, I'm using Google to read up on x86 assembly as I do this! The sources I found and used are here: 1, 2, 3, 4. Please correct anything that's wrong!

The Beginning

Okay, LG_Calibrator_GetXYZ is called by some code inside a module named CalibrationHandler, so let's go there.

003185CF  |. 8D4424 10      LEA EAX,DWORD PTR SS:[ESP+10]
003185D3  |. 6A 01          PUSH 1
003185D5  |. 8D4C24 0C      LEA ECX,DWORD PTR SS:[ESP+C]
003185D9  |. 50             PUSH EAX
003185DA  |. 8D5424 20      LEA EDX,DWORD PTR SS:[ESP+20]
003185DE  |. 51             PUSH ECX
003185DF  |. 52             PUSH EDX
003185E0  |. FF15 70F03100  CALL DWORD PTR DS:[<&LG_ACB8300.LG_Calibrator_GetXYZ>]

This is the code just before the function call. Parameters to a function are passed on the stack (added using the PUSH instruction). I had to look up what LEA does:

The lea instruction places the address specified by its second operand into the register specified by its first operand. Note, the contents of the memory location are not loaded, only the effective address is computed and placed into the register. This is useful for obtaining a pointer into a memory region.

Right, so it stores the pointer to the address in the register, then pushes that value onto the stack. So, three of the four parameters that LG_Calibrator_GetXYZ takes are pointers. The other one is just the value 1.

Backing up a second, let's sort out what the pointers actually reference.

Local variables are stored on the stack. Space is made for them by simply shifting the stack pointer (ESP) the appropriate distance. In this function, there is:

00318576  |. 83EC 20        SUB ESP,20

Meaning 32 bytes (0x20) of space have been made.

Also, that does say SUB, as in subtract. To clarify, the stack grows downwards from high to low memory, so space is allocated by subtracting from the current address. PUSH is similar, being shorthand for subtracting four bytes from ESP and then inserting the specified value at that address.

The local variables are accessed as positive offsets from ESP. From this I can see that all of the pointers being passed to LG_Calibrator_GetXYZ are pointers to local variables. They're also all initialised to zero, using instructions similar to the following:

00318599  |. C74424 0C 0000>MOV DWORD PTR SS:[ESP+C],0

So, at this point, I'm expecting the result of LG_Calibrator_GetXYZ to be stored using those three pointers. The last parameter is currently of unknown purpose.

Passing Data Around

LG_Calibrator_GetXYZ creates some stack space for local variables, and then calls LG_Calibrator_GetADC passing pointers to the freshly created space as parameters:

003E1FFE  |> 8B45 14        MOV EAX,DWORD PTR SS:[EBP+14]
003E2001  |. 8D4C24 08      LEA ECX,DWORD PTR SS:[ESP+8]
003E2005  |. 50             PUSH EAX
003E2006  |. 8D5424 14      LEA EDX,DWORD PTR SS:[ESP+14]
003E200A  |. 51             PUSH ECX
003E200B  |. 8D4424 20      LEA EAX,DWORD PTR SS:[ESP+20]
003E200F  |. 52             PUSH EDX
003E2010  |. 50             PUSH EAX
...
003E201B     E8 F0FEFFFF    CALL LG_ACB83.LG_Calibrator_GetADC

Something here that I almost missed (I should probably use a bigger font...), is that the first command references an offset to EBP rather than ESP.

After doing some quick searching, I find EBP is known as the base pointer. By convention, EBP takes the value of the stack pointer as it stands at the start of the function, before ESP is adjusted to make space for local variables.

The current value of EBP is also pushed onto the stack, so that it may be restored at the end of the function.

003B1FB0 >/$ 55             PUSH EBP
003B1FB1  |. 8BEC           MOV EBP,ESP

This means that the parameters passed to a function can be accessed as offsets to EBP. It's not quite as straightforward as the first parameter being EBP+4 though.

When a function is called, the first action is to push the return address to the stack, so that execution knows where to continue from when the called function completes. So, at the start of the called function, ESP is 4 bytes separated from the parameters. Pushing the current value of EBP on to the stack at the start of the function adds another 4 bytes of separation. EBP is then assigned to the current ESP.

At the end of all this, the leftmost parameter is at EBP+8, the next one is at EBP+12, etc.

Okay, back to the code. Starting at EBP+8, there are four parameters. These are at +8, +12, +16, and +20.

The value being assigned to EAX, EBP+14, doesn't match any of these... but I'm thinking in decimal and this is in hex. 0x14 is 20. It's the fourth parameter.

Okay, so much like LG_Calibrator_GetXYZ, the call to LG_Calibrator_GetADC takes three pointers, and the same number that was passed to LG_Calibrator_GetXYZ.

Talking to the Sensor

The LG_Calibrator_GetADC function doesn't do any of the work itself. It farms it out to another function, this new one being unnamed. This new function takes only one parameter, which is the fourth parameter passed into LG_Calibrator_GetADC, which is the fourth parameter passed into LG_Calibrator_GetXYZ, which is 1.

I'm not going to go too far down this path, so I'll have a quick look-see what happens in this function and then make a sweeping assumption about what comes out of it.

This is what I've found:

  • This unnamed function actually does the business. It calls further functions, themselves calling WriteFile and ReadFile.
  • The readings are grabbed from the sensor, and added to an address in the data segment - a global variable.
  • I do mean added; it runs in a loop acquiring readings, until it's done the number of iterations, specified by this function's single parameter.

Once this function has completed, LG_Calibrator_GetADC executes code similar to this for each of the cumulative totals generated:

003B1F2F  |. 8B15 0CE23B00  MOV EDX,DWORD PTR DS:[3BE20C]  ; 1
003B1F35  |. 894C24 04      MOV DWORD PTR SS:[ESP+4],ECX   ; 2
003B1F39  |. DB4424 18      FILD DWORD PTR SS:[ESP+18]     ; 3
003B1F3D  |. 895424 00      MOV DWORD PTR SS:[ESP],EDX     ; 4
003B1F41  |. 8B5424 0C      MOV EDX,DWORD PTR SS:[ESP+C]   ; 5
003B1F45  |. DF6C24 00      FILD QWORD PTR SS:[ESP]        ; 6
003B1F49  |. 894C24 04      MOV DWORD PTR SS:[ESP+4],ECX   ; 7
003B1F4D  |. D8F1           FDIV ST,ST(1)                  ; 8
003B1F4F  |. DD1A           FSTP QWORD PTR DS:[EDX]        ; 9

I've labelled each line, 1 - 9, to clear up what's going on here.

  1. Copy the value contained within the global data at 3BE20C to the EDX register.
  2. Set the value at ESP+4 to be 0 (an 'XOR ECX,ECX' has previously been executed, meaning ECX is 0).
  3. Load the value at ESP+18 into the floating point register ST0. This is the fourth parameter passed in - the iteration count.
  4. Copy EDX onto the stack (because FILD cannot load from a register; it must load from a memory location).
  5. Copy ESP+C, the second parameter (the value of which is a memory address within the stack), into EDX.
  6. Load the value stored on the stack by #4 into the floating point register ST0. The current ST0 moves to ST1.
  7. As with #2, set the value at ESP+4 to be 0.
  8. Divide ST0 by ST1, the result of which is stored in ST0. ST1 is untouched.
  9. Output the value at ST0 to the memory address stored in EDX. This means that the local variable in LG_Calibrator_GetXYZ now contains the result of this calculation.

So, multiple readings are taken, the average of which is used by LG_Calibrator_GetXYZ to calculate the XYZ value.
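In C#, the averaging that those nine instructions perform is just this (a sketch; the names are mine):

```csharp
using System;

class AdcAverage
{
    // The FILD/FDIV/FSTP sequence above boils down to this: divide the
    // cumulative total of readings by the iteration count passed in.
    public static double Average(long cumulativeTotal, int iterations)
    {
        return (double)cumulativeTotal / iterations;
    }

    static void Main()
    {
        Console.WriteLine(Average(450, 5)); // 90
    }
}
```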

Once LG_Calibrator_GetADC has done its magic, a whole mess of floating point code is executed in LG_Calibrator_GetXYZ. I'm not even going to begin to go through it, but I have checked that the FSTP operations write their values out to the pointers passed in to the function.

The result of all this is that I am now pretty sure that to call LG_Calibrator_GetXYZ, I pass it three pointers to doubles and a number of samples.

Was all that actually right?

Now I've got a reasonable idea about how this might work, and what should happen. There are a few things I've assumed are happening, including the assumption that I know what the hell the assembly code is doing. Let's launch the application, step through the code, and see if everything still makes sense.

Just About to Call LG_Calibrator_GetXYZ

The four items most recently added to the stack are in fact three pointers to other addresses on the stack - local variables - and the value 1. A good start.

Just About to Call LG_Calibrator_GetADC

Once again, the four most recent stack items are three pointers to local variables, and the value 1 copied from the parameters.

Just After LG_Calibrator_GetADC

This one confused me briefly, as the pointers passed (eg. 05C1FD08) did not change at all. The address directly after it (05C1FD0C), however, did. The answer is that the values generated by LG_Calibrator_GetADC are doubles, therefore 64 bits. x86 is Little Endian, which means that in memory the least significant bytes are stored before the most significant ones.

Easily proven: reverse the two bytes, bash the hex into this calculator, and get a reasonable answer. Sorted.
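The byte ordering is easy to demonstrate from C# itself, using BitConverter:

```csharp
using System;

class EndianCheck
{
    static void Main()
    {
        // On x86, the least significant byte of a double comes first in memory.
        // 1.0 is 0x3FF0000000000000 in IEEE 754, so the 0x3F ends up last.
        byte[] bytes = BitConverter.GetBytes(1.0);
        Console.WriteLine(BitConverter.IsLittleEndian);   // True on x86
        Console.WriteLine(BitConverter.ToString(bytes));  // 00-00-00-00-00-00-F0-3F
    }
}
```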

Just After LG_Calibrator_GetXYZ

The result here is similar, as the outputs are doubles. This confirms what I thought about how to call it.

Starting Up

One aspect I've missed so far is how the calibrator is initialised by this module. I'll not go into too much detail on this point, as there's not much to it.

There are two functions in the list of exports that look appropriate for initialisation: LG_Calibrator_DeviceOpen, and LG_Calibrator_SetMonitorType.

LG_Calibrator_DeviceOpen does pretty much exactly what you'd expect it to, which is to identify the calibrator and call CreateFile to get a handle with which it can read/write to the device.

LG_Calibrator_SetMonitorType, on the other hand, does something more interesting. The calculations in LG_Calibrator_GetXYZ reference a global variable, which it appears is actually assigned by this function. I'm guessing that some monitors require special handling for the calibrator to produce the right results. There's a switch statement that takes a value between 0 and 3, and sets the global variable to a different value for each. The value switched on is passed in as a parameter.

Investigating where it's called from, to see what would cause the different values to be passed, I find yet another switch statement. This function outputs some debugging information, which means there's some nice helpful strings being assigned inside each switch option. The possible monitor types (and the values passed to LG_Calibrator_SetMonitorType):

  • 0 - MONITOR_IPS7
  • 1 - MONITOR_IPS8
  • 3 - MONITOR_IPS88RGB
  • 2 - MONITOR_IPS9
  • 0 - Default

I stick a breakpoint here in the debugger and then run the LG software to find out what kind of monitor I have. The answer is: Default. I can't actually find any reference to any of those other options anywhere on the internet, so I have no idea what they're for.

At some point in the future I'll re-visit this to find out exactly how that magic value changes the calculation, and what effect it has on the results.

Back to Civilisation

At this point, I have enough information to import the functions into a C# application and see what happens. These are the signatures I'll be needing:

[DllImport("LG_ACB8300.dll", SetLastError = true, ExactSpelling = true, CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Auto)]
public static extern void LG_Calibrator_DeviceOpen();

[DllImport("LG_ACB8300.dll", SetLastError = true, ExactSpelling = true, CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Auto)]
public static extern void LG_Calibrator_SetMonitorType(int i);

[DllImport("LG_ACB8300.dll", SetLastError = true, ExactSpelling = true, CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Auto)]
public static extern void LG_Calibrator_GetXYZ(ref double x, ref double y, ref double z, uint iterations);

I've made the assumption that the pointers passed in to LG_Calibrator_GetXYZ are in the same order as the function name suggests, purely because doing it any other way makes no sense. I'm also disregarding all return values for now.

To use this, the LG_ACB8300.dll file needs to be added to the project and set to be copied to the output directory on build. If it's not there, then the application won't be able to import the functions when it runs.

The code to use this is dead simple:

static void Main(string[] args)
{
    double x, y, z;
    x = y = z = 0;

    LG_Calibrator_DeviceOpen();
    LG_Calibrator_SetMonitorType(0);  // 0 = default

    while (true)
    {
        LG_Calibrator_GetXYZ(ref x, ref y, ref z, 1);
        Console.WriteLine("X: {0:F3}. Y: {1:F3}. Z: {2:F3}", x, y, z);
        Thread.Sleep(1000);
    }
}

So, finally, at long last, the result:

The last two readings are a red (255, 0, 0) test image, and my desk. Ignore the desk.

Stick the numbers 48.893, 25.112, and 1.860 into an XYZ to RGB calculator, and the result is pure red. Great success!
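The same sanity check can be done without an online calculator, by multiplying the XYZ triple (scaled to 0–1) by the textbook XYZ-to-linear-sRGB matrix for a D65 white point. This is just a sketch of the standard conversion; it's not necessarily the exact transform the LG software itself uses:

```csharp
using System;

class XyzToRgb
{
    static void Main()
    {
        // The reading for the pure red (255, 0, 0) test image,
        // scaled from the 0-100 range down to 0-1.
        double x = 48.893 / 100, y = 25.112 / 100, z = 1.860 / 100;

        // Standard XYZ -> linear sRGB matrix (D65 white point).
        double r = 3.2406 * x - 1.5372 * y - 0.4986 * z;
        double g = -0.9689 * x + 1.8758 * y + 0.0415 * z;
        double b = 0.0557 * x - 0.2040 * y + 1.0570 * z;

        // Clamp to [0, 1]; small excursions outside the range are noise.
        Func<double, double> clamp = v => Math.Max(0, Math.Min(1, v));
        Console.WriteLine("R={0:F3} G={1:F3} B={2:F3}", clamp(r), clamp(g), clamp(b));
        // R comes out above 1 (clipped), G and B fractionally below 0: pure red.
    }
}
```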

The next thing to think about is how to turn this fantastic new colour identification capability into something I can use to calibrate some other arbitrary device...

Comments

andyboeh commented on RGB, XYZ, x86:

Based on your initial debugging, I just decoded the floating point magic in the Get_XYZ function. Basically, it does an offset correction, a 3x3 matrix multiplication and another offset correction. Then it applies another correction based on the monitor type (a 3x3 matrix multiplication).

The variables for the offsets and first matrix multiplication are read from the hardware device during initial opening. The monitor type correction matrix seems to be hard-coded into the library.

For now, I have a working ADC->XYZ calculation in Octave, my next step is to implement the communication with the hardware so that the LG library is not necessary anymore.

Snaked replied:

Sounds like you're making some serious progress.

My next plan was to try to do something similar (remove the dependency on the LG library) so I could run it off a Pi and have a portable calibrator for use on anything I wanted.

I'll put a pin in that until I see where you go with this. Have you considered doing a project for it?

andyboeh replied:

By project, you mean here on incrediBits? Nope, but I put some information on my website (www.aboehler.at/doku/doku.php/projects:acb8300).

I do have a basic patch for ArgyllCMS ready (I'm on Linux, so it should work on the Pi as well), but I doubt that the readings are correct. In a few days I'll have access to another calibrator (a Color Munki device) and I'll cross-check then.

andyboeh replied:

Great success! Today I cross-checked the results with another calibrator, fixed a few bugs and just calibrated my display using ArgyllCMS and the LG calibrator!

The patch for Argyll is located on the website from my previous post. This should allow you to use your Pi for calibrating about everything.

Snaked replied:

That's excellent! Fantastic work. You've taken it far beyond what I achieved here. I'll look into moving forward with the portable calibrator (Pi, ACB8300, HDMI cable, and a battery pack!) soon.

Were you able to identify the actual purpose of the monitor types and their matrices?

andyboeh replied:

Thanks, but I built heavily on your work. As I didn't know x86 assembler either, I used your OllyDbg guide to dig into this :) And given the fact that you already had the function signatures, it was easy to have a starting point for debugging.

The version of True Color Pro that I debugged has more monitor types in it than yours (seven). The purpose of the matrices is to adapt the calibrator to different display technologies and specific models. Types 0 through 3 are for generic screens with Type 1 being a simple unity matrix. Types 4 through 6 have specific LG display names. Unfortunately, I don't know the meaning of MONITOR_IPS7, either.
I need to trace the DLL once more because I forgot to write down the specific names. The values of all the matrices are in the patch for Argyll, so you can easily experiment with every type.


USB Lyrebird (cont.)

Last time, I had a fairly frustrating experience trying to get some messages going back and forth between my PC and the LG Calibrator. I'm hopeful that, now I've proven the basic mechanism, things will go a bit more easily.

Tidying Up

Let's break things down a bit, and put them back together in a more sensible fashion.

The messages I send to the LG Calibrator are all 43 bytes long, and the last 42 bytes are always zero. Having a 43 byte array defined in its entirety is pretty pointless. It can easily be replaced with this function, which creates a message for me:

static byte[] GetCommandMessage(byte command)
{
    // Byte arrays initialise to 0x00 by default.
    byte[] message = new byte[43];
    message[0] = command;

    return message;
}

The initialisation process consists of sending 7 messages one after the other. I have no idea what their responses mean, and I plan on ignoring them for now. So, I'll just stick 'em in an array and send them to the calibrator one after the other:

static bool InitialiseCalibrator(USBInterface driver)
{
    // Don't bother checking responses at this point, just blast all the messages to the calibrator.
    byte[] initialisation = new byte[] { 0x01, 0x51, 0x52, 0x54, 0x55, 0x80, 0x05 };
    foreach (byte command in initialisation)
    {
        Thread.Sleep(100);  // Short wait between each

        Console.Write("Sending: " + command.ToString("X2") + ".");
        if (driver.write(GetCommandMessage(command)))
        {
            Console.WriteLine(" OK.");
        }
        else
        {
            Console.WriteLine(" Failed.");
            return false;
        }
    }

    return true;
}

The receipt event is tidied up a bit too. The odd first byte is removed, and if it's a reading response (they start 0x32) then a function is called to handle the reading. It now looks like:

driver.enableUsbBufferEvent((sender, eventArgs) =>
{
    USBHIDDRIVER.List.ListWithEvent events = sender as USBHIDDRIVER.List.ListWithEvent;
    if (events != null)
    {
        foreach (byte[] received in events)
        {
            // Remove the first byte, which is not actually part of the message.
            byte[] bytes = received.Skip(1).ToArray();  

            Console.WriteLine(BitConverter.ToString(bytes));
            if (bytes[0] == 0x32)
            {
                HandleReading(bytes);
            }
        }
    }
});

HandleReading just writes its own name to the console for now. I'll come to that later, when things are behaving themselves.

Okay, almost ready to roll. One more update to bring the initialisation and readings together. The main loop of the application becomes:

if (InitialiseCalibrator(driver))
{
    while (true)
    {
        // Get a reading from the calibrator
        driver.write(GetCommandMessage(0x31));
        Thread.Sleep(1000);
    }
}
else
{
    Console.WriteLine("Initialisation failed.");
}

Going Backwards

You know how I said things should go more easily? Yeah, no. This is what I see when I launch the application:

It just ... sits there. Doesn't fail, just hangs forever. Going into the guts of the library again, I find it's waiting in HidD_GetPreparsedData. If this looks familiar, it's because it's one of the things I fiddled with last time around. I know this used to work, so I try a couple of different USB ports in case something odd is going on, unfortunately to no effect.

OK, better start winding back through time to see what exactly has caused it to break.

Long story short, you see that main loop up there? I'm calling driver.startRead just before it starts. Makes sense, right? Get the read thread running in the background so it can pick up the responses to anything I send? Nope! I don't understand the full reasoning, but when the read thread is trying to ReadFile, HidD_GetPreparsedData will block.

And it's trying to ReadFile all the time, blocking until data is received. But it will not receive data, because we can't send a message to the device. Deadlock!

From the previous implementation, it's possible to see that any data the device transmits is buffered until something reads it. This means I can make the read and write operations take turns. Send a message, activate the read thread, wait a bit, process any incoming messages, kill the read thread, repeat. It's a bit of a bodge, but it'll be in good company.

static void CycleReadThread(USBInterface driver)
{
    // The read thread blocks writing to the port. So, the read thread is toggled between writes
    // to collect any messages that have been sent.
    driver.startRead();
    Thread.Sleep(1000);
    driver.stopRead();
}

Tuck a call to CycleReadThread just before the end of InitialiseCalibrator to collect the initialisation messages, and again just after driver.write(GetCommandMessage(0x31)) in the main loop.

The moment of truth:

Okay, that's a number of responses significantly in excess of what I sent. What? Oh.

The response messages are '03 03 53 03 53 53 03 53 53 53'. See the pattern? How about if I write it like this: '03, 03 53, 03 53 53, 03 53 53 53'? Each new message received is added to a list that's not being cleared between events. I just assumed that each event would simply contain the things that had happened since the last one. My bad; easily fixed by just wiping the 'events' list when I'm done with it.
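The behaviour is easy to reproduce with a plain List standing in for the library's ListWithEvent; this is just an illustrative mock-up of the accumulation problem, not the library's actual code:

```csharp
using System;
using System.Collections.Generic;

class StickyList
{
    static void Main()
    {
        // Stand-in for USBHIDDRIVER's ListWithEvent: the producer only ever adds.
        var events = new List<byte>();

        foreach (byte message in new byte[] { 0x03, 0x53, 0x53, 0x53 })
        {
            events.Add(message);

            // Without the Clear() below, each pass would print the whole
            // history so far: 03, then 03-53, then 03-53-53, and so on.
            Console.WriteLine(BitConverter.ToString(events.ToArray()));

            events.Clear();  // the fix: wipe the list once it's been processed
        }
    }
}
```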

Much better!

Blocked

Well, that jubilation was short lived. I successfully initialise the calibrator, and receive the messages it sends me in the process. The messages I try sending to actually get readings, though, they just don't seem to do anything.

Once again, it's this whole ReadFile thing. Stopping the read thread doesn't stop the read thread. It's asked nicely to stop, but because it's currently busy in the middle of ReadFile, the request is ignored. So, we're back to square one.

Back to the MSDN to find out more about ReadFile, and if there's some (hopefully straightforward!) way to make it time out rather than blocking forever.

When reading from a communications device, the behavior of ReadFile is determined by the current communication time-out as set and retrieved by using the SetCommTimeouts and GetCommTimeouts functions.

This looks hopeful. This thing is totally a communications device, right?

A quick Google later (hello stackoverflow), I add the following signature:

[DllImport("kernel32.dll", EntryPoint = "SetCommTimeouts", SetLastError = true)]
public static extern bool SetCommTimeouts(int hFile, CommTimeouts timeouts);

Which takes the file handle and a collection of time-out values, defined as follows:

class CommTimeouts
{
    public UInt32 ReadIntervalTimeout;
    public UInt32 ReadTotalTimeoutMultiplier;
    public UInt32 ReadTotalTimeoutConstant;
    public UInt32 WriteTotalTimeoutMultiplier;
    public UInt32 WriteTotalTimeoutConstant;
}

The MSDN page for the CommTimeouts structure, more specifically the ReadIntervalTimeout field, says:

The maximum time allowed to elapse before the arrival of the next byte on the communications line, in milliseconds. If the interval between the arrival of any two bytes exceeds this amount, the ReadFile operation is completed and any buffered data is returned.

So, if I create it like this:

CommTimeouts timeouts = new CommTimeouts();
timeouts.ReadIntervalTimeout = 100;
timeouts.ReadTotalTimeoutConstant = 0;
timeouts.ReadTotalTimeoutMultiplier = 0;
timeouts.WriteTotalTimeoutConstant = 0;
timeouts.WriteTotalTimeoutMultiplier = 0;

Then the ReadFile call should give up after 100ms if it doesn't get anything.

Great! Now I just need to get the file handle so I can call SetCommTimeouts on it.

The handle is stored as a public member in an instance of the USBSharp class. This instance is created by HIDUSBDevice when it is instantiated, and stored in a private member named myUSB. The HIDUSBDevice instance itself is created by the USBInterface class, also when it is instantiated, also stored in a private member, but this one's named usbdevice.

The obvious solution would be simply to change 'private' to 'public' for those two members. I mean, I've touched the code in the library once already. The difference is, the previous edit was for a good reason (fixing an issue, making it work properly). Changing private to public just for my convenience would be gross and wrong. And there's another way, which is still sorta gross, but keeps its grossness self-contained in my code. Reflection!

In basic terms, reflection lets you poke around inside the contents of arbitrary objects at run-time. You can get a list of their contents, invoke their functions, mess with their data. All without necessarily knowing what exactly you actually want to touch when the application is built. Most importantly, for my purposes, it even lets you read the content of private member variables.

So, as if by magic, I have access to the USBSharp instance.

// Get the HIDUSBDevice from the USBInterface, using reflection to side-step the 'private'
Type uiType = typeof(USBInterface);
FieldInfo usbDeviceField = uiType.GetField("usbdevice", BindingFlags.NonPublic | BindingFlags.Instance);
HIDUSBDevice device = usbDeviceField.GetValue(driver) as HIDUSBDevice;

// Same again for the USBSharp in HIDUSBDevice
Type hudType = typeof(HIDUSBDevice);
FieldInfo usbSharpField = hudType.GetField("myUSB", BindingFlags.NonPublic | BindingFlags.Instance);
USBSharp usbSharp = usbSharpField.GetValue(device) as USBSharp;

And thus:

// The file handle is public in USBSharp, so now we can just set the timeouts.
if (!SetCommTimeouts(usbSharp.HidHandle, timeouts))
{
    Console.WriteLine(Marshal.GetLastWin32Error());
}

I do this on start-up, straight after the handle is created. The result?

And what does '1' mean? ERROR_INVALID_FUNCTION. Looks like it's not a communications device after all.

Death to I/O

Back to the drawing board. And by drawing board I mean the MSDN ReadFile page. One of the related items is CancelIo, which looks like it could be interesting. Unfortunately, it only works for the calling thread, which is not the one I'd be calling it from. CancelIoEx, on the other hand, does not discriminate and will take out all IO for that handle, regardless of thread.

So, new plan: send a message, activate the read thread, wait a bit, process any incoming messages, kill the read thread, kill all IO on that file handle. Nuke it from orbit, etc.

Signature:

[DllImport("kernel32.dll")]
static extern bool CancelIoEx(int hFile, int lpOverlapped);

I grabbed all the reflection stuff into a function named CancelUSBIO but replaced the time-out related code with:

if (!CancelIoEx(usbSharp.HidHandle, 0))
{
    Console.WriteLine(Marshal.GetLastWin32Error());
}

CycleReadThread is now:

static void CycleReadThread(USBInterface driver)
{
    // The read thread blocks writing to the port. So, the read thread is toggled between writes
    // to collect any messages that have been sent.
    driver.startRead();
    Thread.Sleep(1000);
    driver.stopRead();
    CancelUSBIO(driver);
}

And the result:

That's initialisation, read responses, and calls to the HandleReading function. Now I can, at long last, get on to trying to understand what it's trying to tell me!

16 or 32

If using zero-indexing, only bytes 1-8 (inclusive) actually do anything. I've tried pointing the calibrator at all manner of things whilst this has been running and none of the other values have ever changed. I reckon this is either going to be two 32-bit numbers, or four 16-bit numbers. I'll interpret it both ways and see if anything looks particularly wrong (or right, as the case may be).

Here's my HandleReading function:

static void HandleReading(byte[] reading)
{
    // Only the 8 bytes from 1 to 8 inclusive do anything useful.
            
    // Turn them into two 32-bit numbers.
    int A32 = BitConverter.ToInt32(reading, 1);
    int B32 = BitConverter.ToInt32(reading, 5);

    Console.WriteLine("32:");
    Console.WriteLine("A: " + A32.ToString());
    Console.WriteLine("B: " + B32.ToString());

    // Could also be four 16-bit numbers
    int A16 = BitConverter.ToInt16(reading, 1);
    int B16 = BitConverter.ToInt16(reading, 3);
    int C16 = BitConverter.ToInt16(reading, 5);
    int D16 = BitConverter.ToInt16(reading, 7);

    Console.WriteLine("16:");
    Console.WriteLine("A: " + A16.ToString());
    Console.WriteLine("B: " + B16.ToString());
    Console.WriteLine("C: " + C16.ToString());
    Console.WriteLine("D: " + D16.ToString());
}

Console output, including waving the calibrator around a little:

I think the 32 bit numbers are fully insane, whereas the 16 bit values look far more reasonable. The ones you can see right at the top were pointing at my screen, the next two came in as I was putting it down on the desk.

Colours

I'm not going to get much more done in this session, but I figure I may as well point the calibrator at a bunch of significant colours and see if anything leaps out at me. I decided to try all of the combinations of RGB with one, two, or all three of the channels at 255. These colours, specifically:

Tabulating the results for each of ABCD for each of those colours as read by the calibrator from my screen, I get:

       Red   Green  Blue  Magenta (R+B)  Cyan (G+B)  Yellow (R+G)  White
A      127   188    94    219            281         313           405
B      47    88     1214  1264           1298        131           1342
C      504   2409   319   818            2727        2900          3222
D      1264  1322   594   1848           1916        2580          3166

There does seem to be some kind of correlation between the colours and the activation of B, C, and D. I'm not sure what A is; some kind of overall brightness measure, perhaps? I'm also not sure what kind of scale it's operating on, or how the blue on my screen compares to what I'd guess would be the bluest thing ever at 65535.
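One pattern that does seem to fall out of the table: for the B, C, and D channels, the mixed colours read almost exactly as the sum of their component primaries, which suggests the sensor responds linearly. A quick check using the B-channel numbers from the table:

```csharp
using System;

class Additivity
{
    static void Main()
    {
        // Channel B readings from the results table.
        int red = 47, green = 88, blue = 1214;
        int magenta = 1264, cyan = 1298, yellow = 131, white = 1342;

        // If the channel is linear, each mix should equal the sum of its parts.
        Console.WriteLine("Magenta: measured {0}, R+B   = {1}", magenta, red + blue);
        Console.WriteLine("Cyan:    measured {0}, G+B   = {1}", cyan, green + blue);
        Console.WriteLine("Yellow:  measured {0}, R+G   = {1}", yellow, red + green);
        Console.WriteLine("White:   measured {0}, R+G+B = {1}", white, red + green + blue);
        // Every pair lands within about ten counts of each other.
    }
}
```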

I'll need to do some research and reading up on calibrators in general I think. But, considering how long it's taken me to get to this point, that's a job for another day. I am rather pleased with the progress so far though!

Code Listing

Just for completeness, this is the entire code listing (so far) for the 'USB Lyrebird' application:

using System;
using System.Linq;
using USBHIDDRIVER;
using System.Threading;
using System.Runtime.InteropServices;
using System.Reflection;
using USBHIDDRIVER.USB;

namespace Lyrebird
{
    class Program
    {
        [DllImport("kernel32.dll")]
        static extern bool CancelIoEx(int hFile, int lpOverlapped);

        static void Main(string[] args)
        {
            Console.Title = "USB Lyrebird";
            USBInterface driver = new USBInterface("vid_043e", "pid_9af0");

            // Handler for ctrl+c.
            Console.CancelKeyPress += delegate { Exit(driver); };

            // Anonymous function as a callback for the data receipt event
            driver.enableUsbBufferEvent((sender, eventArgs) =>
            {
                USBHIDDRIVER.List.ListWithEvent events = sender as USBHIDDRIVER.List.ListWithEvent;
                if (events != null)
                {
                    foreach (byte[] received in events)
                    {
                        // Remove the first byte, which is not actually part of the message.
                        byte[] bytes = received.Skip(1).ToArray();

                        Console.WriteLine(BitConverter.ToString(bytes));
                        if (bytes[0] == 0x32)
                        {
                            HandleReading(bytes);
                        }
                    }

                    // The events are persistent, so empty the list.
                    events.Clear();
                }
            });

            // Start it up and start reading.
            if (InitialiseCalibrator(driver))
            {
                while (true)
                {
                    // Collect any messages
                    CycleReadThread(driver);

                    // Get a reading from the calibrator
                    driver.write(GetCommandMessage(0x31));
                }
            }
            else
            {
                Console.WriteLine("Initialisation failed.");
                Console.ReadLine();
                Exit(driver);
            }
        }

        static void HandleReading(byte[] reading)
        {
            // Only the 8 bytes from 1 to 8 inclusive do anything useful.

            // Turn them into two 32-bit numbers.
            int A32 = BitConverter.ToInt32(reading, 1);
            int B32 = BitConverter.ToInt32(reading, 5);

            Console.WriteLine("32:");
            Console.WriteLine("A: " + A32.ToString());
            Console.WriteLine("B: " + B32.ToString());

            // Could also be four 16-bit numbers
            int A16 = BitConverter.ToInt16(reading, 1);
            int B16 = BitConverter.ToInt16(reading, 3);
            int C16 = BitConverter.ToInt16(reading, 5);
            int D16 = BitConverter.ToInt16(reading, 7);

            Console.WriteLine("16:");
            Console.WriteLine("A: " + A16.ToString());
            Console.WriteLine("B: " + B16.ToString());
            Console.WriteLine("C: " + C16.ToString());
            Console.WriteLine("D: " + D16.ToString());
        }

        static void CycleReadThread(USBInterface driver)
        {
            // The read thread blocks writing to the port. So, writes are sent, and then the read thread is toggled
            // to collect any messages that have been sent.
            //
            // The stopRead() function should abort the read thread but doesn't if it's currently blocked inside
            // ReadFile. The thread being blocked inside ReadFile means that writes will not go through. To work
            // around this, any current IO is cancelled after the thread is aborted in order to get it to exit
            // properly.

            // Collect messages
            driver.startRead();
            Thread.Sleep(1000);
            driver.stopRead();
            CancelUSBIO(driver);
        }

        static void CancelUSBIO(USBInterface driver)
        {
            // Get the HIDUSBDevice from the USBInterface, using reflection to side-step the 'private'
            Type uiType = typeof(USBInterface);
            FieldInfo usbDeviceField = uiType.GetField("usbdevice", BindingFlags.NonPublic | BindingFlags.Instance);
            HIDUSBDevice device = usbDeviceField.GetValue(driver) as HIDUSBDevice;

            // Same again for the USBSharp in HIDUSBDevice
            Type hudType = typeof(HIDUSBDevice);
            FieldInfo usbSharpField = hudType.GetField("myUSB", BindingFlags.NonPublic | BindingFlags.Instance);
            USBSharp usbSharp = usbSharpField.GetValue(device) as USBSharp;

            // The file handle is public in USBSharp
            if (!CancelIoEx(usbSharp.HidHandle, 0))
            {
                Console.WriteLine(Marshal.GetLastWin32Error());
            }
        }

        static byte[] GetCommandMessage(byte command)
        {
            byte[] message = new byte[43];
            message[0] = command;

            return message;
        }

        static bool InitialiseCalibrator(USBInterface driver)
        {
            // Don't bother checking responses at this point, just blast all the messages to the calibrator.
            byte[] initialisation = new byte[] { 0x01, 0x51, 0x52, 0x54, 0x55, 0x80, 0x05 };
            foreach (byte command in initialisation)
            {
                Thread.Sleep(100);  // Short wait between each

                Console.Write("Sending: " + command.ToString("X2") + ".");
                if (driver.write(GetCommandMessage(command)))
                {
                    Console.WriteLine(" OK.");
                }
                else
                {
                    Console.WriteLine(" Failed.");
                    return false;
                }
            }

            // Collect the initialisation responses
            CycleReadThread(driver);

            return true;
        }

        static void Exit(USBInterface driver)
        {
            driver.stopRead();
            Environment.Exit(0);
        }
    }
}

Comments

Jack commented on USB Lyrebird (cont.):

Hi Snaked, very interesting, and very complicated! Interesting that the device seemed to pick up more of what appears to be 'Red' when placed over yellow than it did when placed over red! I would have suggested that it could be CMYK, but that doesn't really make sense with the values you got, or for it being used with a screen calibrator, as CMYK is used more in printing.

Snaked replied:

I spent a while looking into it, and the answer was not as simple as I'd have liked.

I've just made another post explaining it in detail, but the short version is that A, B, C, D (probably!) correlate to Clear, Blue, Green, Red readings from the sensor chip in the calibrator.

Converting this to something useful is a whole 'nother kettle of fish!

Jack replied:

Clear? Sounds like an odd value. I will have a read of your post!


USB Lyrebird

The Lyrebird is an Australian bird that has a ridiculous talent for mimicking things it's heard, which is pretty much what I'm trying to do now; mimic the noises made by the LG software and hope that the calibrator thinks it's talking to the real deal.

The messages I need to send are detailed in my previous post, so this is all about how we get to that state.

HID USB Library

I'm on Windows (you may have guessed this by now!), and this seems like a C# kinda job. A short amount of time on Google, and I find something that looks like it'd be useful: The HID USB Driver. The documentation is a little sparse, but the source for it is right there, and there's enough to be getting started with.

First up, let's find the device I'll need to be messing around with. Crack open Device Manager and I'm faced with a wall of 'HID-compliant device' entries. Despite the calibrator identifying itself as 'LG Calibrator', there's no useful description available in the properties of any of them.

From the days of hunting down drivers for weird and wonderful devices, I know there's a 'Hardware Id' section which describes the thing Device Manager is listing:

The 'VID_XXXX' is a vendor ID, and the 'PID_XXXX' is a product ID. Chuck the combination into Google and you find out what you're looking at. Though, in this case, that fails miserably! There were no results at all for VID_043E&PID_9AF0, but VID_043E brought back LG-related things, so I'll assume it's this one.
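Pulling the two IDs out of a hardware ID string mechanically is simple enough. A sketch; the full `HID\VID_xxxx&PID_xxxx` string shape here is my assumption of the usual Device Manager format:

```csharp
using System;
using System.Text.RegularExpressions;

class HardwareId
{
    static void Main()
    {
        // Typical shape of a Device Manager hardware ID for a HID device.
        string hardwareId = @"HID\VID_043E&PID_9AF0";

        // Capture the four hex digits after VID_ and PID_ respectively.
        Match m = Regex.Match(hardwareId, @"VID_([0-9A-Fa-f]{4})&PID_([0-9A-Fa-f]{4})");
        if (m.Success)
        {
            Console.WriteLine("Vendor:  " + m.Groups[1].Value);  // 043E (LG)
            Console.WriteLine("Product: " + m.Groups[2].Value);  // 9AF0
        }
    }
}
```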

Let's just do a quick test to make sure that the device can be found by the library.

USBInterface driver = new USBInterface("vid_043e", "pid_9af0");
string[] devices = driver.getDeviceList();

foreach (string device in devices)
{
    Console.WriteLine(device);
}

Result:

Well, this means basically nothing to me, but it proves that it's found it!

A slight modification to the code in order to send the first message I logged to the calibrator. The byte array was generated simply from copying the colon-separated list from the last post and replacing ':' with ', 0x'.

USBInterface driver = new USBInterface("vid_043e", "pid_9af0");

if (driver.Connect())
{
    Console.WriteLine("Connected!");                

    byte[] hello = new byte[] { 0x01,
        0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
        0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
        0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
    };
    
    if (driver.write(hello))
    {
        Console.WriteLine("Sent!");
    }
    else
    {
        Console.WriteLine("Failed!");
    }
}

And, failure. I see 'Connected!' followed by 'Failed!'. Using the USB sniffer reports a complete failure to transmit; the message I tried to send doesn't show up at all.

Down the Rabbit Hole

This was a fairly long and convoluted issue, with a few dead ends along the way.

The first thing to try is of course running the application with administrative privileges, in case there was some issue with writing to the device as a normal user. Unfortunately, this was not the resolution.

So, better find out where exactly it's falling down. As it turns out, there's a fairly lengthy chain of events behind the simple write call, culminating in a call to the Win32 API in the form of the WriteFile function. And it's this call that was failing. It was simply returning 0 (indicating failure), with no further explanation.

I did some searching and, reading through the comments on this page, came across a comment stating that two handles were being opened to the USB device, with writes to the second handle failing. Another comment slightly further on reported that they couldn't see the same issue, but I figured it was worth investigating anyway.

As it turns out, the first commenter was right. The constructor for the USBInterface object creates a HIDUSBDevice object, and it's this new object that actually does all the grunt work of talking to the USB device. The constructor for HIDUSBDevice calls connectDevice(), which creates a file handle using the CreateFile Win32 API call. The USBInterface.Connect() function ALSO calls HIDUSBDevice.connectDevice(), which means that following the provided example code and calling USBInterface.Connect() after instantiating the USBInterface does in fact result in two CreateFile calls and two file handles. Simply removing the call to USBInterface.Connect() resolves this, but does not fix the failure to write.

Okay, let's see what Microsoft says about the WriteFile function. There was not much of obvious value, until I came across the words "To get extended error information, call the GetLastError function". A short Google later, and I find that the best way to do this in C# is to call Marshal.GetLastWin32Error. Output this to the console, and let's see what's going on. My 'else' block becomes

else
{
    Console.WriteLine("Failed!");
    Console.WriteLine(Marshal.GetLastWin32Error());
}

and the number '1784' appears under 'Failed!'.

Now we're getting somewhere! Or not, as 1784 is "The supplied user buffer is not valid for the requested operation" which means absolutely nothing to me. Back to Google once again!

I read a lot of words written by a lot of different people. Most of it was related to C++, but the general gist of what I was reading was along these lines:

  • The data buffer is smaller than the number of bytes that it's being asked to write.
  • The final two parameters in the function have been set incorrectly.
  • The data type of the buffer is different than expected, with hints of 32/64-bit confusion.

I could confirm through debugging that the buffer was the same size as the write it was being asked to make, so it wasn't that. The Microsoft documentation confirmed that the parameters were being set correctly. So, perhaps there is something odd with the byte array I'm throwing around?

The pinvoke.net page for WriteFile confirmed that the 'byte' type was valid, though the library was passing it slightly differently. I modified the signature (and the code that called it) to match that on pinvoke.net, but there was no change. Pretty much grasping at straws now, I spent some time setting the projects' platform targets to all possible combinations of 'Any CPU', 'x86', and 'x64' to see if it made any difference. Nope, still failing, still with the same error!

I've been bashing away at this for a fairly significant amount of time now, so a tea break is in order before returning to the task of scouring the internet for anything at all relevant.

Light Dawns

I stumbled across one thing, one post on a forum that I can't even find again when deliberately searching for it, that mentioned a similar issue transmitting data to a Wiimote over Bluetooth. The gist of their issue was that the device would only accept data of exactly the right length for the command being sent, but the commands are variable length. The device was reporting to Windows that its output length was a fixed size, and the WriteFile function would fail if they attempted to send fewer bytes than that reported output length. This doesn't answer my question, but it does give me some more information about what could be going on.

Armed with some new search terms, I discovered that some USB devices will not accept messages if they do not match their strict criteria of message length. Others are more accepting.

The array I'm trying to send is 43 bytes in length, the exact same size as the messages captured previously. However, by the time it reaches the WriteFile call, the array is magically 65 bytes long. Well, this isn't quite what I asked for, so let's work out where that's coming from. Tracing through the chain of functions behind write, I find this:

// myUSB.CT_HidD_GetPreparsedData(myUSB.HidHandle, ref myPtrToPreparsedData);
// int code = myUSB.CT_HidP_GetCaps(myPtrToPreparsedData);

int outputReportByteLength = 65;

Well, there's the 65! And some odd commented out code.

I remember seeing reference to 'CAPS' (capabilities) on one of the pages I skimmed. myUSB.CT_HidP_GetCaps goes on to call a Windows function, HidP_GetCaps. The documentation for this function states that it returns a HIDP_CAPS structure, which in turn contains the field OutputReportByteLength.

I'm not sure why the output size is hardcoded to 65, but it's late and I have nothing to lose, so I modify the code to read:

myUSB.CT_HidD_GetPreparsedData(myUSB.HidHandle, ref myPtrToPreparsedData);
int code = myUSB.CT_HidP_GetCaps(myPtrToPreparsedData); 
int outputReportByteLength = myUSB.myHIDP_CAPS.OutputReportByteLength;

Hit run, and:

Success! That took a lot more damned effort than it should have done, but it is at least now dropping ones and noughts onto the wire.

It does set the number of bytes to be output to 44 though, which is one more than the 43 I'm sending. Reading through the code shows a single byte being inserted before my message in one of the intermediate functions, with the comment "Store the report ID in the first byte of the buffer". I don't know what that means, and I'm not sure that I care too much right now. I'll look into it later.
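Having since skimmed the HID documentation, my reading is that the first byte of every output report buffer is a report ID (0x00 for devices that don't use report IDs), and the buffer handed to WriteFile has to be exactly OutputReportByteLength bytes long. A rough sketch of what the library appears to be doing with my message - in Python rather than the library's C#, and with the length of 44 taken from what I saw in the debugger (one report ID byte plus my 43-byte message):

```python
def wrap_output_report(payload: bytes, report_length: int = 44) -> bytes:
    """Prefix a raw message with a 0x00 report ID byte and zero-pad
    it to the device's OutputReportByteLength (44 for this device:
    the report ID plus the 43-byte message)."""
    if len(payload) + 1 > report_length:
        raise ValueError("payload longer than the output report allows")
    buffer = bytes([0x00]) + payload
    return buffer + bytes(report_length - len(buffer))

hello = bytes([0x01]) + bytes(42)   # the 43-byte 'hello' message
report = wrap_output_report(hello)  # 44 bytes, ready for WriteFile
```

If the Wiimote post is anything to go by, it's this exact-length requirement that WriteFile was enforcing all along.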

And it Reflects

Implementing the code to read data from the device involves a few more lines of code, most of which are documented. The library attempts to continually read data from the device in a separate thread, raising an event with the new data whenever a complete message is received.

I'll need to do some tidying up of the code before going much further, as things are a little all over the place and it's only going to get worse. The code now reads:

USBInterface driver = new USBInterface("vid_043e", "pid_9af0");
Console.WriteLine("Connected!");

// Anonymous function as a callback for the data receipt event
driver.enableUsbBufferEvent((sender, eventArgs) =>
{
    USBHIDDRIVER.List.ListWithEvent events = sender as USBHIDDRIVER.List.ListWithEvent;
    if (events != null)
    {
        foreach (byte[] bytes in events)
        {
            Console.WriteLine(BitConverter.ToString(bytes));
        }
    }
});

byte[] hello = new byte[] { 0x01,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
};

if (driver.write(hello))
{
    Console.WriteLine("Sent!");
}
else
{
    Console.WriteLine("Failed!");
    Console.WriteLine(Marshal.GetLastWin32Error());
}

driver.startRead();
Thread.Sleep(1000);
driver.stopRead();

// Wait for an enter keypress, to stop the window closing instantly.
Console.ReadLine();

And, more importantly, the output:

I'm assuming, as there are 44 bytes rather than the expected 43, that the additional 0x00 inserted at the front is a report ID, like on the outgoing message. The response does not exactly match the message from yesterday, but there is a lot of similarity. This could be normal; I only have one message to compare it against!

The most important thing, however, is that it responded at all! Woohoo!

More Messages

It's taken tremendously more time than I would have liked to get to this point. I was hoping to have the full set of messages bouncing backwards and forwards by now, and to be starting work on interpreting the values it sends. That will instead be the next chunk of effort, as all of this has been rather hard going and I'm well in need of a break.


Talk To Me

Eavesdropping

One of the things I am aware of is that USB talks using packets; messages are sent as discrete chunks of data to and from the device. So, with nothing else to go on, I'll have a go at finding out what exactly is happening when the calibration is running.

So, first up, find a way of sniffing the traffic.

One of my favourite tools is Wireshark; it's pretty much the best thing I've ever used for collecting and analyzing network traffic. A quick search brings me to a page on their wiki explaining that, whilst Wireshark can't capture USB traffic directly, there's a tool called USBPcap which can dump it into a file format that can be loaded into Wireshark afterwards.

USBPcap

The installation was a standard affair, so I'll say no more on that. The application itself runs in the console and needs elevated privileges. To stop Windows bugging me every time I ran it - which got boring quickly after the first couple of goes - I used an elevated command prompt.

Running USBPcapCMD.exe results in this:

That's a list of all the USB things connected to my PC. I've got a USB game controller, a generic 'HID-compliant device', a keyboard, and my monitor's USB hub. That hub has a 'TUSB3410 Device' which I don't recognise, but I'm pretty sure is the monitor itself (the calibration software's got to talk to it somehow!), my mouse, and my headset. I reckon the generic device on Port 2 is the calibrator.

Not that it makes much difference, because my options to filter by are 1, 1, or 1. Which is everything. Selecting the filter then prompts for a filename, after which the capture starts, with ctrl-c to stop it.

Bash the keyboard, move the mouse, and then open the dump with Wireshark to see what it looks like.

Well, it's got some data. The Source addresses do not in any way relate to the port numbers listed by USBPcap, and I have no idea what they are.

I unplugged things one by one (after spending way too long trying to figure it out without crawling under the desk), and re-ran the capture each time. Here's the results:

  • 18.1: USB Game Controller (I left this unplugged due to the level of noise it makes)
  • 1.1: Keyboard
  • 9.1: Mouse

For some reason, I couldn't get my headset to produce anything in the capture. It's not going to get in the way, so I'll leave it alone for now.

Calibration

The basic premise is sound, time to hook up the calibrator and see what kind of noise it makes when the LG software drives it.

A new device shows up in the capture, 10.1. This I believe to be the monitor, partially because of the timing of the messages, but primarily because it pretty much says that's what it is in the messages it sends:

Shortly after these two comes:

Searching the internet for 'mccs' brings me to the Monitor Control Command Set (part of the VESA DDC/CI standard), and that text starting 'prot(monitor)' is the capabilities string being sent in response to a query from the LG software.

I tried this a few times. The software reads a few test patches on the screen to work out whether the calibrator is there or not, so there should be something in the log from when it's busy doing that. But no luck. The 10.1 device is not the calibrator, and it's the only new device; there's no evidence of the calibrator in any of the logs whatsoever. Sod it!

Calibration Continued

I figured I'd try the same thing again, but this time start with the calibrator unplugged and plug it in after starting the capture, meaning I should be able to see Windows say hello to the device and therefore find out which port it's on.

From looking at the log post-capture, it seems that Windows asks every connected USB device for a DESCRIPTOR when you plug a new device in. I found this response:

It says 'LG Calibrator'. It's probably the 'LG Calibrator'. Now I can filter the entire log for this device ('usb.device_address == 16' in the filter bar) and see if I can figure out anything about how it works from the messages that have been passed around.

Message in a Packet

The calibrator seems to go through two stages. Initially, it receives (and responds to) a few different messages from the host, then settles into a cycle where it's sent the same message over and over again, each prompting a response. I'm assuming this is an initialisation process, followed by retrieval of the actual readings.

This is what appears to be the initialisation process:

>> From Host to Device
<< From Device to Host

>> 01:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
<< 03:a9:09:00:20:0c:00:0e:00:5b:00:73:00:18:01:02:a2:3c:80:19:94:4a:47:0a:00:1e:46:68:c3:40:15:40:11:88:34:00:06:20:86:2d:84:49:04

>> 51:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
<< 53:36:21:8a:57:44:10:a7:3f:fa:3b:fc:08:d6:6e:78:bf:80:b2:ca:20:e2:60:65:bf:9c:31:77:9a:04:05:81:3f:31:ba:4a:b5:54:01:9f:3f:49:04

>> 52:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
<< 53:60:49:98:40:f4:c0:75:bf:b0:c7:fc:29:c0:c3:75:bf:f8:4e:dc:97:bb:8c:59:3f:a0:57:6b:06:4f:b5:b8:3f:31:ba:4a:b5:54:01:9f:3f:49:04

>> 54:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
<< 53:00:00:00:00:00:00:2c:40:00:00:00:00:00:00:30:40:00:00:00:00:00:00:28:40:00:00:00:40:1f:07:d1:3f:00:00:00:c0:00:a0:d1:3f:49:04

>> 55:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
<< 53:00:00:00:60:17:fb:df:3f:dd:00:00:00:00:00:30:40:00:00:00:00:00:00:28:40:00:00:00:40:1f:07:d1:3f:00:00:00:c0:00:a0:d1:3f:49:04

>> 80:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
<< 88:64:0c:09:0c:17:fb:df:3f:dd:00:00:00:00:00:30:40:00:00:00:00:00:00:28:40:00:00:00:40:1f:07:d1:3f:00:00:00:c0:00:a0:d1:3f:49:04

>> 05:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
<< 03:64:0c:09:0c:17:fb:df:3f:dd:00:00:00:00:00:30:40:00:00:00:00:00:00:28:40:00:00:00:40:1f:07:d1:3f:00:00:00:c0:00:a0:d1:3f:49:04

>> 06:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
<< No Response

I have no idea what any of those are doing!
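One thing the dump does make obvious is the shape of every host-to-device message: a single command byte followed by 42 zeroes. That much is easy to reproduce for later replay attempts. A quick sketch (in Python, purely for poking at the captured data; the command values are simply the ones seen above):

```python
def make_command(cmd: int) -> bytes:
    """Build a 43-byte host->device message: one command byte
    followed by 42 bytes of zero padding, matching the captures."""
    return bytes([cmd]) + bytes(42)

# The message types seen in the initialisation exchange above
INIT_COMMANDS = (0x01, 0x51, 0x52, 0x54, 0x55, 0x80, 0x05, 0x06)
init_messages = [make_command(c) for c in INIT_COMMANDS]

# Reproduces the second '>>' line of the dump exactly
print(make_command(0x51).hex(':'))
```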

This seems to be how the current colours are read:

>> From Host to Device
<< From Device to Host

>> 31:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
<< 32:a4:03:eb:0c:3c:1d:51:1c:dd:00:00:00:00:00:30:40:00:00:00:00:00:00:28:40:00:00:00:40:1f:07:d1:3f:00:00:00:c0:00:a0:d1:3f:49:04

>> 31:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
<< 32:02:00:09:00:0d:00:0d:00:dd:00:00:00:00:00:30:40:00:00:00:00:00:00:28:40:00:00:00:40:1f:07:d1:3f:00:00:00:c0:00:a0:d1:3f:49:04

>> 31:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
<< 32:f9:03:c9:0d:de:1f:02:1f:dd:00:00:00:00:00:30:40:00:00:00:00:00:00:28:40:00:00:00:40:1f:07:d1:3f:00:00:00:c0:00:a0:d1:3f:49:04

... skip a few ...

>> 31:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
<< 32:4d:02:01:08:6d:12:f9:11:dd:00:00:00:00:00:30:40:00:00:00:00:00:00:28:40:00:00:00:40:1f:07:d1:3f:00:00:00:c0:00:a0:d1:3f:49:04

>> 31:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
<< 32:4d:02:01:08:70:12:f9:11:dd:00:00:00:00:00:30:40:00:00:00:00:00:00:28:40:00:00:00:40:1f:07:d1:3f:00:00:00:c0:00:a0:d1:3f:49:04

Looking at the responses, the last 34 bytes don't change at all, despite a good 15 seconds being skipped in the middle, during which the colour under the sensor definitely changed. I would have expected them to change if they were ever going to. The first byte just indicates the message type, so there are actually only 8 bytes that change in the 43-byte message.

That doesn't make a huge amount of sense to me, seems like a lot of wasted message. Maybe it will become clear later.
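Purely as a guess, eight changing bytes in a sensor reading smells like four 16-bit values, perhaps one per sensor channel. Assuming they're little-endian (completely unverified on my part), the first captured response would decode as follows:

```python
import struct

def decode_reading(response: bytes):
    """Interpret bytes 1-8 of a 0x32 response as four little-endian
    unsigned 16-bit values. This is a guess at the format; what each
    value means (if anything) is still unknown."""
    assert response[0] == 0x32, "not a measurement response"
    return struct.unpack_from('<4H', response, 1)

# First measurement response from the capture above
response = bytes.fromhex(
    '32a403eb0c3c1d511cdd00000000003040000000000000284000'
    '0000401f07d13f000000c000a0d13f4904')
print(decode_reading(response))  # → (932, 3307, 7484, 7249)
```

Whether those numbers bear any relation to light falling on the sensor will only become apparent once I can drive the device myself and vary the colour under it.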

What's Next

For one, time is getting on and I have things I should be doing. I'll come back to this. I think I'll see if I can get the calibrator to talk to me separately from the LG software by replaying the initialisation and request messages I've captured.

I doubt I'll be able to work out what the initialisation messages do, but I may be able to work out a relation between colour/brightness and the values in the readings it produces.
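As a first stab at the replay, something along these lines is what I have in mind. The send callable is a placeholder for whatever HID write/read mechanism ends up working - this is a dry-run sketch in Python, not working device code, so the sequencing can be sanity-checked without any hardware attached:

```python
INIT_COMMANDS = (0x01, 0x51, 0x52, 0x54, 0x55, 0x80, 0x05, 0x06)
READ_COMMAND = 0x31

def replay(send, samples=3):
    """Replay the captured exchange: the initialisation commands in
    order, then a handful of 0x31 read requests. `send` takes a
    43-byte message and returns the device's response bytes (or
    None where no response is expected, as for command 0x06)."""
    init_responses = [send(bytes([c]) + bytes(42)) for c in INIT_COMMANDS]
    readings = [send(bytes([READ_COMMAND]) + bytes(42)) for _ in range(samples)]
    return init_responses, readings

# Dry run against a stand-in 'device' that just notes what it was sent
log = []
fake_send = lambda msg: log.append(msg[0]) or b''
replay(fake_send)
```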


Hello World!

What This Thing Is

As I've mentioned in the project description, this calibrator is an LG product specifically for LG kit. Some background:

I bought one of the huge super-wide 21:9 screens a while back, which has some kind of hardware colour configuration thing where the mapping of RGB from computer to LCD is defined on something inside the monitor. There's a piece of software (used in conjunction with a hardware calibrator device) which automatically sets this config so that the output is a colour-calibrated image.

I wasn't really prepared to spend the hundred-odd quid needed for a calibrator device, and the defaults (once brightness was reduced from 'nuclear explosion' to something reasonable) were pretty decent, so I was happy enough.

At this point I didn't know the LG calibrator existed; I found out about it from some forum thread I was reading. LG had made a device, compatible with their software, to calibrate their screens, and all for a grand total of £30. Tempting!

Colours!

I was thinking about getting one. I should state at this point that I don't NEED one; I don't do any colour work or anything with photos or design. It's just that it was an expensive monitor, which made it slightly more worth spending the extra cash to get things set up properly and get the most out of it.

Luckily, I found an Amazon seller flogging them for a stupidly low price - 5 quid or thereabouts - and for that much it would have been rude not to. Got it delivered, plugged it in, and installed the software. It looks like this:

Choose a brightness (optionally mess with the colour temperature and gamma curve) and hit the button. A square appears on the screen; stick the calibrator on top of it, and it rotates through a bunch of different colours and shades so the software can work out what is what and set up the screen properly. Nice!

It made a fairly reasonable modification to the original settings. And, placebo or not, I think it's an improvement for daily use.

And Then...

Well, I have a TV too. That could look nicer if it were set up properly. There's another monitor on a different PC that could also do with the treatment.

I get to thinking:

  • The LG software works with a few different calibrator devices.
  • LG probably didn't create an entirely bespoke piece of hardware in their calibrator, it's surely some standard kit on the inside.
  • The other hardware calibrators probably throw their readings out as some fairly standard-ish thing, so that they can be used with different software.

So...

  • The LG calibrator probably talks something fairly sensible and not LG-specific.
  • Or, there's some way to convert what the calibrator talks to something sensible and not LG-specific.
  • Or, the LG software works using something LG specific, and it converts from what the other calibrators talk to that.

Upside: If my guesses are true (and I can figure out how), there's a reasonable chance this calibrator can be used as a general-purpose device.

Downside: I have no idea how calibrators work, what language they speak, or what I'm doing in general!
