Shannon-Fano Compression Explained and Demonstrated in Native PHP

Introduction

This article will explain how Shannon-Fano coding works. Named after Claude Shannon and Robert Fano, this is probably the simplest form of lossless compression apart from run-length encoding. I also include some PHP code to demonstrate compression and decompression natively.

No deep knowledge of mathematics or compression is necessary, other than a basic knowledge of binary.

About Compression

Compression is Everywhere

Just about everybody uses data compression today, probably without realising it. The DVD or Blu-ray that you watch, the MP3 you enjoy, and the digital TV that is now the only type in the UK, all use compression to reduce the size of the data that is stored or sent to you. The images on web pages, and sometimes even the web pages themselves, are also compressed.

Lossy Compression

The compression algorithms used by DVDs, Blu-rays, MP3s and some computer images (such as JPG files) use a form of compression called lossy because it does not reproduce the original data perfectly. It achieves a greater level of compression by removing the parts of the video, picture or sound that are not needed to enjoy the experience.

Lossless Compression

Lossless compression is the counterpart to lossy – the original data is returned unchanged when the compressed data is uncompressed. If you extract files from a ZIP or RAR file, these are returned to you exactly as they were before they were added to the ZIP or RAR file.

This article, and the article to follow, deal only with lossless compression.

Compressing a Short Text Example

Let’s start with a simple example. We will compress the word TATTOO. If we consider only a simple character string for the moment, TATTOO requires 6 bytes of storage.

This article will show how TATTOO can be compressed into just 2 bytes using Shannon-Fano encoding. The detailed explanation of Shannon-Fano is below, but you don’t need it yet.

When a computer reads TATTOO from the 6 bytes of storage, it simply reads from the sequential bytes; each byte contains 8 bits of binary data.

The binary representation of TATTOO is:-

01010100 01000001 01010100 01010100 01001111 01001111

..that is, 6 bytes or 48 binary bits.
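
That bit pattern is easy to check in code. The article’s demonstration code is PHP, but here is a minimal Python sketch (the variable name is illustrative):-

```python
# Reproduce the 48-bit uncompressed form of TATTOO from its ASCII codes.
bits = " ".join(f"{ord(ch):08b}" for ch in "TATTOO")

print(bits)
# 01010100 01000001 01010100 01010100 01001111 01001111
```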

If we use Shannon-Fano to encode TATTOO, it is reduced to just:-

10011010 1

…or just nine binary bits. The compressed data requires two bytes of storage (it almost fits into one).

How is it done? Here’s how.

Bits into Characters

The 9 bits 100110101 encode the word TATTOO, but unlike the uncompressed data, a single character is not represented by 8 bits; the number of bits that represent each character is variable. That’s how it compresses TATTOO into just 9 bits.

You will see shortly that the 9 bit compressed data 100110101 comprises these bits to store the characters:-

BITS Letter
1 T
00 A
1 T
1 T
01 O
01 O
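
The table above is a prefix code: no character’s bit pattern is the start of another’s, so the codes can simply be concatenated. A minimal Python sketch of this (the names are illustrative, not the article’s PHP class):-

```python
# The variable-length code table from the article: T=1, A=00, O=01.
codes = {"T": "1", "A": "00", "O": "01"}

# Concatenate each character's code to form the compressed bitstream.
bits = "".join(codes[ch] for ch in "TATTOO")

print(bits, len(bits))
# 100110101 9
```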

How does the decompression software “know” how many bits make up each character? How does it “know”, for example, that the first bit represents ‘T’, and the next 2 comprise ‘A’, as revealed in the table above? It uses a binary tree.

Binary Tree

The decompression software is supplied with a binary tree which it uses to decode the bitstream that is the compressed data.

The decoder traverses the tree once for each compressed element (character, in this instance). This is a simple and therefore fast operation for a computer to execute.

A binary tree can be visualised by reference to the illustration below, showing the options to decode the first 3 binary digits of an encoded element.

The tree is traversed by taking the left branch at each node if the current binary digit is 1, and the right branch if it’s zero.

Using A Binary Tree to Decompress

Now we will use the actual binary tree which will be used to decompress the 9 bit compressed data sample. To decompress that data (100110101) we follow these steps. Please refer to the illustration below.

  1. Start at the Root, at the top.
  2. Read each compressed binary digit from left to right. The first digit is 1. If a digit is 1, take the left fork, and if it’s 0 take the right.
  3. Moving from Root we thus branch left because the first digit is 1.
  4. If there are no further branches possible from our new position, and there is data at that node (a leaf node), use the letter at that position as the first decoded character. It’s T. Note that the illustration below also shows the binary bits that were used to decode this character, in this case just 1.
  5. After each character is decoded move back to Root. Read the next binary digit, which is 0. Take the right branch this time, from Root. If there are no further branches at the new position, use the letter; however this time, there are further branches.
  6. Read the next binary digit, 0. Again, move left for 1, or right for 0, so move right. At the new position you will see the numbers 00 which represents the bits you have used so far to decode this letter. We’ve arrived at a node which has no further branches but has a letter. We now have the second letter, A.
  7. Return to Root to decode the third character. The next bit is 1. Move left from Root to reach a node with no further branches and the letter T. We have character number 3.
  8. Repeat again, and once again we use the next bit, which is 1; the movement down the tree gets the fourth character, T.
  9. The next bit is 0. Take the right branch from Root; the next bit is 1, so take the left fork to arrive at the fifth letter, O.
  10. The final bits are 0 and 1, and you may have spotted that this is identical to the previous character; the sixth and final decoded character is O.
  11. We now have our complete decoded word, TATTOO, 48 bits extracted from just 9!
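
The decode walk in the steps above can be sketched in a few lines of Python (the dict-based tree and function name are illustrative; the article’s demonstration class is PHP):-

```python
# Each internal node is a dict with branches under "1" (left) and "0"
# (right); a leaf is simply the decoded letter.
tree = {"1": "T",               # 1  -> T
        "0": {"0": "A",         # 00 -> A
              "1": "O"}}        # 01 -> O

def decode(bits, tree):
    out = []
    node = tree                  # start at the Root
    for bit in bits:
        node = node[bit]         # 1 takes the left branch, 0 the right
        if isinstance(node, str):
            out.append(node)     # reached a leaf: emit the letter...
            node = tree          # ...and return to the Root
    return "".join(out)

print(decode("100110101", tree))  # TATTOO
```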

A Complete Compressed Message

When the source data is compressed, the software assigns the characters to be encoded to the tree nodes, attempting to create a tree that will yield the most efficient compression of the source data.

We have already seen an example of a compressed bitstream; the complete compressed data includes:-

  1. The length of the uncompressed data
  2. Data telling how to construct the binary tree that was used to compress the data
  3. The compressed data

Items 1 and 2 are a necessary overhead that must accompany the compressed data.

Creating the Binary Tree using Shannon-Fano

The Shannon-Fano algorithm used to create the binary tree to compress and decompress is very simple.

  1. Create an empty binary tree. Set the current position to the root.
  2. Create a frequency table for all elements present in the source data.
  3. Sort the table by frequency so that the most common element is at the start.
  4. Split the table so that the total frequencies in the two parts are as close to equal as possible. The most common symbols are in the “left” portion, the least common in the “right”. You now have two parts.
  5. Work on each part: split it so that the total frequencies in its two halves are again as close to equal as possible.
  6. Repeat 5 until each part has two or fewer symbols.
  7. Assign a digit to each part; the left portion is assigned 1, the right is assigned 0.
  8. Repeat for all parts.
For our TATTOO example, the frequency table is:-

Symbol Frequency
T 3
O 2
A 1

In this example, we can clearly bisect the symbols by frequency; the most common (T) alone has a total of 3, matching the combined total of O and A. After we have divided, the most common portion has only one symbol (T), so we add it to the empty tree. This leaves O and A in the remaining section, so these are added to the tree.
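
The splitting procedure can be sketched recursively in Python. This is an illustrative implementation of the steps above, not the article’s PHP class; for simplicity it returns the bit codes directly rather than building an explicit tree, and it picks the split at the first point where the running total reaches half:-

```python
from collections import Counter

def shannon_fano(symbols):
    # symbols: list of (symbol, frequency) pairs, most frequent first.
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    # Split so the two parts' frequency totals are as close as can be.
    total = sum(f for _, f in symbols)
    running, split = 0, 0
    for i, (_, f) in enumerate(symbols, start=1):
        running += f
        split = i
        if running * 2 >= total:
            break
    left, right = symbols[:split], symbols[split:]
    codes = {}
    for sym, code in shannon_fano(left).items():
        codes[sym] = "1" + code    # the left portion is assigned 1
    for sym, code in shannon_fano(right).items():
        codes[sym] = "0" + code    # the right portion is assigned 0
    return codes

freq = Counter("TATTOO")                              # T:3, O:2, A:1
table = sorted(freq.items(), key=lambda kv: -kv[1])   # most common first
print(shannon_fano(table))   # {'T': '1', 'O': '01', 'A': '00'}
```

Note how the result reproduces exactly the code table used in the decoding walkthrough earlier.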

Compressing Data using the Binary Tree

Compression is the reverse of the decompression process explained earlier. Using a binary tree created as above, do this for each character in the uncompressed text:-

  1. Find the leaf node for the current character.
  2. Work from that node up the tree, one branch at a time, until you reach the root.
  3. For each move up, note a 1 if you moved up from a left branch, or a 0 if from a right branch. Because the decoder reads bits from the root down to the leaf, the noted bits are added to the output in reverse order.
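
The leaf-to-root walk can be sketched in Python using simple parent pointers (the node structures and names here are hypothetical, purely for illustration):-

```python
# Each node records its parent and the branch ("1" = left, "0" = right)
# that leads down to it from that parent.
root   = {"parent": None, "branch": None}
t_leaf = {"parent": root, "branch": "1"}   # T hangs on the root's left
mid    = {"parent": root, "branch": "0"}
a_leaf = {"parent": mid,  "branch": "0"}   # A = 00
o_leaf = {"parent": mid,  "branch": "1"}   # O = 01

leaves = {"T": t_leaf, "A": a_leaf, "O": o_leaf}

def code_for(ch):
    bits, node = [], leaves[ch]
    while node["parent"] is not None:   # climb until we reach the root
        bits.append(node["branch"])     # note the branch we climbed from
        node = node["parent"]
    return "".join(reversed(bits))      # decoding reads root-to-leaf

print("".join(code_for(ch) for ch in "TATTOO"))  # 100110101
```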

There is, However, One Small Problem

Shannon-Fano is not very good. Its method of assigning bits to symbols does not always produce the best possible compression results. Shannon-Fano is generally not used now; Huffman coding and other methods have replaced it.

Using Shannon-Fano (Regardless) for PHP

If you would like to use Shannon-Fano in PHP, we have prepared a PHP class which compresses and decompresses text in memory. It was created as a means to demonstrate Shannon-Fano, but it could be utilised.

Typical uses could be:-

  • Storing large text in a database BLOB in a compressed form.
  • Compressing binary data.

If you don’t wish to use PHP compression libraries, or are unable to do so, or if you are interested in compression, consider using the class.

Our PHP Class Shannon.php

The class has just four public functions:-

  • compressText which as its name implies, compresses text, producing a byte array.
  • expandText which expands a byte array that was previously compressed from text
  • compressBin which compresses a byte array, producing another byte array.
  • expandBin which expands a byte array that was previously compressed from a byte array

The code snippet below demonstrates basic use of the class:-


<?php

require('Shannon.php');

$instance = new Shannon();

$text = "More ending in death, but this time it sounds like a ";
$text.= "solace after life. I lingered round them, under that ";
$text.= "benign sky; watched the moths fluttering among the ";
$text.= "heath, and hare-bells; listened to the soft wind ";
$text.= "breathing through the grass; and wondered how any one ";
$text.= "could ever imagine unquiet slumbers ";
$text.= "for the sleepers in that quiet earth.";

echo "text len=".strlen($text)." characters\n";

$enc_ar = $instance->compressText($text);

echo "encoded len=".count($enc_ar)." bytes\n";

$org_text = $instance->expandText($enc_ar);

if(strcmp($org_text,$text)==0)
{
    echo "decoded text matches\n";
}
else
{
    echo "decoded text DOES NOT match\n";
}

?>

The above text (the end of Wuthering Heights) comprises 333 characters. The resulting compressed byte array is 227 bytes in length.

The PHP class is available to download here.

Beyond Shannon-Fano

A forthcoming blog post will explain and demonstrate Huffman coding, a similar but more efficient method.


Sencha + PhoneGap for Android tutorial!

I just spent quite a while trying to get Sencha + PhoneGap to play nice and install the demo app on my Android phone, so here are the installation steps for anyone else unfortunate enough to be as clueless as me when it comes to these things!

1. Download Sencha Touch (http://www.sencha.com/products/touch/) and extract the contents to c:\wamp\www\ (or the www directory of whichever web server you are using)

2. Download and install Sencha CMD (http://www.sencha.com/products/sencha-cmd/download)

3. Open a command prompt at C:\wamp\www\sencha-touch-2.3.1a-commercial\touch-2.3.1

4. Run “sencha generate app YourAppName ../../YourAppFolder”

5. CD to ../../YourAppFolder

6. Run “sencha phonegap init”

7. Edit YourAppFolder/phonegap.local.properties and change:

phonegap.platform=ios

to

phonegap.platform=android

8. Connect your android device via USB and enable USB debugging on the device.

9. Run “sencha app build -run native”

10. Done! You should see the demo app running on your device 🙂

Any questions – ask away!

Visual C++ Runtime and Static Linking Made Simple

Introduction

This is a new explanation of an old topic, hoping to answer developer and user questions about the use of the Visual C++ Runtime component by Windows applications, and indeed the non-use of the component by those applications that link to Windows “statically”.

What is the Runtime C++ Component?

It’s analogous to the .NET framework or the Java runtime. It provides an environment in which C++ applications created with Visual C++ (within Visual Studio) are able to run on a PC. The applications hook into the runtime component to connect to Windows rather than carrying the code within themselves.

The component is usually installed or updated if needed, by applications that require it, but it can be downloaded at no cost from Microsoft.

Linking to the Runtime C++ component is also known as linking dynamically. Your application will use the relevant DLL which will be loaded only once, into memory, and is shared by all applications running on the computer (the code itself is shared; each instance using it enjoys its own memory space, with its own stack etc).

Advantages of Dynamic Linking

  • The application executable is significantly smaller
  • Only one copy of the relevant code is present on the machine at runtime, and can be shared by multiple applications
  • Security and other fixes are applied to the runtime with no need to update applications in order to apply the same fixes

Advantages of Static Linking

  • Simpler deployment and installation – no need to install or update the C++ runtime
  • Startup can be faster

Linking to Windows in Visual Studio

Many articles on this subject include the names of the various libraries that are used in different configurations, and the command line switches. These are not included here; we instead show the project settings to be made from within Visual Studio.

All screenshots are from Visual Studio 2008.

A Simple Demonstration Project

In order to demonstrate static and dynamic linking, I created a simple library (testlib.lib), not a DLL, which will be linked into our main program. This is the library within our solution:-

The main program is a simple Windows 32 console application (testapp.exe) that uses the above library. The application and the library form the entire solution:-

Debug or Release

When working in a debug configuration, the linking method is not normally important, provided that each project within the solution is linked to Windows in the same manner. I normally choose the default: linking dynamically to Windows.

The examples illustrated below all refer to a release configuration.

How to Link a Project Dynamically to Windows

Each project within the solution must be linked to Windows in the same manner.

In the solution explorer pane, right click on a project, then click on properties:-

The default linking method in Visual Studio 2008 (and earlier) is dynamic, which is described here as Multi-threaded DLL.

Repeat for all projects in your solution.

How to Link a Project Statically to Windows

Follow the steps illustrated below, but select Multi-threaded.

Repeat for all projects in your solution.

Linking an MFC Project

If your project uses MFC, in order to change the linking method, go to the project property pages, general section. Set the Use of MFC item to static or dynamic:-

Understanding Linking Errors

Linking errors can be confusing, and harder to understand than compilation problems.

If the linker complains that items are already defined in LIBCMT or that something is already defined in msvcrt.lib your first action should be to verify that all projects within your solution are linked in the same manner.

Excluding Libraries

Avoid if possible.

Normally you should never need to exclude all, or specific libraries, unless you are linking to a third-party library, and that in itself can cause problems if there are conflicts.

If, for example, you are linking your project statically to Windows but wish to link to a third-party static library (not a DLL) which has been compiled to link dynamically to Windows, you will see conflicts which can be removed by excluding a library, but this is not recommended.

Easy circular images with CSS!

Circular images are all the rage on the world wide web at the moment. Here’s how you do it.

First – It’s not essential, but this will work much better with images that are already square-shaped, so maybe do that. Nothing will fall into tiny little microscopic pieces of failure if you don’t though.

Second – the HTML.

<div class="img-circular"><img src="duck.jpg" alt="duck" /></div>

Yes, we COULD just use a single DIV and set its background image, to cut down on a bit of HTML, but if you’re pulling your images from a database, and chances are you might be – this method will cause you much less trouble!

Third (and last, actually) – the CSS.

.img-circular {
     width: 417px;
     height: 417px;
     border-radius: 50%;
     overflow: hidden;
}

Awesome.

Gapless Digital Audio Playback – One Solution

Mind That Gap

This post has nothing whatsoever to do with developments in our T2A API, other than that our developers like listening to digital music.

One long-recognised problem with the listening experience of an album encoded as multiple mp3 files (or wma, aac etc) is that of unwanted gaps. If your preference is for a collection of separate songs, this does not affect you, but listeners of music collections where tracks segue into each other would not want any gap when listening to that album as a collection of digital audio files.

Why Gaps are Heard

The main reason is that when an mp3 or other lossy-compressed audio file is created, a short silence is added at the start and end of the track. Some audio formats include information to allow playing hardware to compensate for this, but mp3 does not.

Solving with Hardware

Some newer equipment is able to achieve gapless playback of multiple tracks, either by using a crossfade, or by using information embedded in the audio file to compensate for any gaps at the start and end of each track, where the audio file format includes that information.

If your mp3 playing device leaves gaps in audio that should be gapless, read on.

Solving with Software – Create An Audio Book

Introduction

A cumbersome but otherwise completely successful method is now demonstrated.

By creating an audio book containing a single file with no gaps but with chapters to denote the positions of the former individual tracks, we will achieve gapless playback, provided that we have a player that supports the chosen format, and supports the selection of chapters for playback.

Audio Book Format

For this demonstration we created an .m4b file. This is actually identical to an .m4a, which is an MPEG-4 audio file using the AAC codec. The m4b extension was created so that Apple’s iTunes software and iPod players can recognize the file as an audio book rather than a normal audio track, and thus allow “bookmarking” the file.

An m4b is thus best suited, as one might expect, to Apple devices and to Apple software on other platforms, but third-party support for m4b files with chapters does exist on other devices.

We looked briefly at other formats; .wma and .aac support chapters, as does .mp3 with a later id3v2 addition. Support for these formats is poor both in terms of encoding software and hardware compatibility.

Step by Step

We chose for this demonstration a well known album in which some tracks should have no gaps; “The Dark Side of the Moon” by Pink Floyd opens with 3 gapless tracks.

Below is a screenshot of the audio book that we created, playing in Apple’s QuickTime, on a PC.

Note the chapters (which include 2 extra ones to represent positions within the original tracks).

Prepare The Full File

  1. Rip your copy of the CD to a lossless format, such as WAV. This should ensure that there are no unwanted gaps at the start or end of each track. This perfect reproduction of the CD allows compression straight to the final format; recompressing an already-compressed file should be avoided.
  2. Use a suitable editor to join the lossless tracks together.
  3. Play the joined tracks file, ensure there are no gaps.

You now have a single lossless file, which, if it is stereo 44.1kHz PCM audio, is about 650MB for an hour’s worth of music.
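
That size estimate is easy to verify with a quick Python calculation (assuming 16-bit stereo samples at 44.1kHz, as on a CD):-

```python
# sample rate * channels * bytes per 16-bit sample
bytes_per_second = 44100 * 2 * 2
one_hour = bytes_per_second * 3600   # seconds per hour

print(one_hour)   # 635040000 bytes, i.e. in the region of 600-650MB
```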

Choose an Encoder

We used XMedia Recode, a free multi-format video and audio encoder. It supports the .m4a format with chapters. We created an m4a and then simply renamed the finished file to m4b.

One alternative means to add the chapters is mp4box which we have not tested.

Create the m4a

Load the complete file. Specify chapters by start and end time, and your chosen chapter name. You may wish, as we did with our Pink Floyd demonstration, to insert some extra chapters to allow navigation to a point within a track.

Select a suitable quality for the m4a encoding; the default is 128kbps, but we doubled the bitrate to 256kbps for our m4a.

XMedia Recode will also allow you to specify title, year and other information about the audiobook.

Rename to m4b

As we have seen, m4a files are identical to m4b. When the m4a encoding is complete, rename the extension to .m4b.

The m4b file is now complete.

A short clip (27 seconds, approx 700Kb) from our m4b audiobook is available here to download. This comprises the 15 seconds before our extra chapter marking the “Breathe (reprise)” section of “Time”, and the first 12 seconds of this section.

Play with Quicktime

If you have Apple’s Quicktime application on your PC, play the m4b using QT; you should see and be able to select from the named chapters, and thus be able to select your track of choice.

Using in iTunes and with Apple Devices

Using iTunes in order to upload the m4b to an iPhone, iPod or iPad, you will note that the m4b does not appear in the “music” section – look in the “books” section. Drag it over to your connected device.

We tested our m4b on an iPhone 4S and an iPad. It works especially well on the former, allowing easy navigation to the chapters / tracks.

iTunes also facilitates the easy addition of album artwork to the m4b file.

Using with Android

Users of Android devices may choose to use a free app, the Akimbo Audiobook Player, which supports m4b files with chapters. This has been tested with our m4b on a Sony Xperia Z1.

Conclusion

This is an effective but quite cumbersome means to achieve gapless playback.

If you’re fed up with annoying gaps, have equipment that supports an audiobook format with chapters, and are dedicated enough, this approach may work for you.

Working with ISO-8859-1 and Unicode Character Sets

Introduction

This article gives a brief and not too technical explanation of character encoding, and of the titular character encoding methods. I also outline how to work with the methods, how to fix some common problems and how to choose which encoding system to use.

Why is Character Encoding Important?

If a web developer includes an image in some HTML markup, he/she does not have to specify in what format the media was saved – the browser rendering engine will interpret that using a signature in the media file; similarly a media player will interpret a video file to discover which format the file is in.

Unfortunately character strings have no signature that allows the processing engine to automatically determine the character encoding, in situations where multiple formats may be encountered, such as a web page, or when .NET or Java processes external text files. The developer needs to inform the relevant engine what the character encoding format is.

When Character Encoding Goes Bad

This is a common sight on web pages:-

The price is �100 or about �120

… or the same text showing a different error:-

The price is £100 or about €120

The correct text should be displayed as:-

The price is £100 or about €120

See below for a detailed explanation of the problem and the solution.

What is Character Encoding?

Character encoding is the means by which the characters are stored in a sequence (or stream) of bytes.

One Byte Per Character

The simplest format is the use of a single byte per character, giving 255 usable characters (0 is usually reserved as the terminating character). This is sufficient to display most characters in most western languages, or most characters in any given language.

Two Bytes Per Character

If you have ever programmed in Java or .NET, you will almost certainly have encountered 2 byte (or 16 bit) character encoding, since strings are handled internally in this format. This allows the representation of 65,536 characters, which may initially seem to be sufficient to represent every possible character in written worldwide culture, and it usually is, but not always.

Unicode

Unicode simplifies things by allowing any character to be displayed within a single and huge character encoding system, which includes many thousands of characters – more than can be represented by a 16 bit character encoding.

Its most popular encoding, UTF-8, is also more space efficient than the 16 bit scheme mentioned above; you may also encounter UTF-16 or even UTF-32.

For a more detailed explaination of Unicode see our earlier blog on the subject.

ISO-8859-1 Encoding

ISO-8859-1 is effectively a subset of Unicode. It comprises the first 256 Unicode code points (see below for the full character set) and is also sometimes known as Latin-1, since it features most of the characters that are used by Western European languages.

(The developer should be aware that the first 128 characters, 0–127, are encoded identically in ISO-8859-1 and UTF-8, as a single byte.)

Many web pages created by English and other Western European language speakers are still encoded in ISO-8859-1, since this is sufficient to represent any possible character that they wish to display.

ISO-8859-1 vs UTF-8

When faced with the choice of character encoding, the trade-off is between flexibility on the one hand, and storage space and simplicity on the other.

If only ISO-8859-1 characters are to be used in a project (such as a website), then ISO-8859-1 does offer a slight benefit in terms of storage space, and therefore in the case of a web page, of download size.

Fans of the Swedish/Danish TV show The Bridge will be familiar with the events contained in this sample string:-

Saga Norén leaves Malmö and crosses the Øresund Bridge

The text above comprises 54 characters. All the characters are present within the ISO-8859-1 character set, and so the string can be stored as 54 bytes using a simple one character per byte encoding.

If however the string is stored in UTF-8, it requires 57 bytes. This is because the three non-English characters (which are outside of the lower 0–127 range) are each stored as two bytes in UTF-8. ISO-8859-1 thus has a slight space advantage.
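
The byte counts are easy to confirm in code. A quick Python check (Python 3 strings are Unicode, so the same text can be encoded both ways):-

```python
s = "Saga Norén leaves Malmö and crosses the Øresund Bridge"

print(len(s))                       # 54 characters
print(len(s.encode("iso-8859-1")))  # 54 bytes: one byte per character
print(len(s.encode("utf-8")))       # 57 bytes: é, ö and Ø take two bytes each
```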

I would nevertheless choose UTF-8 to give flexibility to show any possible future characters. Unicode wins.

Web Page Character Encoding Errors Explained

Remember the incorrectly displayed web page text shown above?

Error 1 was:-

The price is �100 or about �120

Error 2 was:-

The price is £100 or about €120

What has gone wrong? Well, the first example shows what happens when text that has been encoded as ISO-8859-1 is displayed on a web page which has told the viewing web browser that the contents are encoded as UTF-8.

The characters £ and € are outside of the lower range (0–127) and are therefore encoded differently in UTF-8 and ISO-8859-1.

The second example shows the opposite; text encoded as UTF-8 is displayed in a page which has informed the web browser that the contents are encoded in ISO-8859-1.

Put simply, the web page encoding information does not match the contents, and horrid errors are shown.
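
Both failure modes can be reproduced in a few lines of Python. One note: browsers commonly treat content declared as ISO-8859-1 as its superset Windows-1252, which is what produces the glyphs shown in error 2, so the cp1252 codec is used below to mirror that behaviour:-

```python
# Error 1: ISO-8859-1 bytes read as if they were UTF-8. The lone byte
# 0xA3 (£) is not a valid UTF-8 sequence, so a replacement character
# appears. (€ does not exist in ISO-8859-1, so £ alone is shown here.)
garbled1 = "£100".encode("iso-8859-1").decode("utf-8", errors="replace")
print(garbled1)   # �100

# Error 2: UTF-8 bytes read as an 8-bit western encoding; each byte of a
# multi-byte UTF-8 sequence becomes its own mangled character.
garbled2 = "The price is £100 or about €120".encode("utf-8").decode("cp1252")
print(garbled2)   # The price is Â£100 or about â‚¬120
```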

In order to display this correct text…

The price is £100 or about €120

.. the simple solution to both problems is to establish which encoding should be used, and then within the

<head>

…of an HTML 4 or earlier page, use

<meta http-equiv="content-type" content="text/html;
charset=utf-8" />

…to specify UTF-8 contents or

<meta http-equiv="content-type" content="text/html;
charset=iso-8859-1" />

…if the contents are ISO-8859-1.

For HTML 5 specifying the character set is simpler:-

<meta charset="utf-8" />

The above code fragments are suitable for flat HTML pages; PHP programmers would use

header("Content-Type: text/html;charset=utf-8");

and a JSP page would use

<%@ page contentType="text/html;charset=UTF-8" %>

…to show just a couple of common examples.

Working with Text Files

A simple text file, as we have seen, carries no header or signature to indicate in what encoding format the text was saved. The programmer should determine that encoding format carefully.

For example, to read an ISO-8859-1 text file containing our 54 character sentence above, in C#, you would:-

// Requires: using System.IO; using System.Text;

StreamReader tr = null;

try
{
   tr = new StreamReader("saga.txt",
                          Encoding.GetEncoding("iso-8859-1"));
   String testline = tr.ReadLine();
}
catch (IOException)
{
   // handle or report the error here
}
finally
{
   if (tr != null)
   {
      tr.Close();
   }
}

The above code will ensure that the non-English characters are read correctly into the .NET String class instance.

Reference: The ISO-8859-1 Character Set

These are the displayable characters in the ISO-8859-1 character set, with their hexadecimal values. Characters 0x20 (space) to 0xff are shown. (Note that 0x7f and 0x80–0x9f are actually control characters in ISO-8859-1; the glyphs shown in that range are the Windows-1252 substitutes that browsers commonly display.)

Character Hex Character Hex Character Hex Character Hex
  20 ! 21 " 22 # 23
$ 24 % 25 & 26 ' 27
( 28 ) 29 * 2A + 2B
, 2C - 2D . 2E / 2F
0 30 1 31 2 32 3 33
4 34 5 35 6 36 7 37
8 38 9 39 : 3A ; 3B
< 3C = 3D > 3E ? 3F
@ 40 A 41 B 42 C 43
D 44 E 45 F 46 G 47
H 48 I 49 J 4A K 4B
L 4C M 4D N 4E O 4F
P 50 Q 51 R 52 S 53
T 54 U 55 V 56 W 57
X 58 Y 59 Z 5A [ 5B
\ 5C ] 5D ^ 5E _ 5F
` 60 a 61 b 62 c 63
d 64 e 65 f 66 g 67
h 68 i 69 j 6A k 6B
l 6C m 6D n 6E o 6F
p 70 q 71 r 72 s 73
t 74 u 75 v 76 w 77
x 78 y 79 z 7A { 7B
| 7C } 7D ~ 7E  7F
80  81 82 ƒ 83
84 85 86 87
ˆ 88 89 Š 8A 8B
Œ 8C  8D Ž 8E  8F
 90 91 92 93
94 95 96 97
˜ 98 99 š 9A 9B
œ 9C  9D ž 9E Ÿ 9F
A0 ¡ A1 ¢ A2 £ A3
¤ A4 ¥ A5 ¦ A6 § A7
¨ A8 © A9 ª AA « AB
¬ AC ­ AD ® AE ¯ AF
° B0 ± B1 ² B2 ³ B3
´ B4 µ B5 ¶ B6 · B7
¸ B8 ¹ B9 º BA » BB
¼ BC ½ BD ¾ BE ¿ BF
À C0 Á C1 Â C2 Ã C3
Ä C4 Å C5 Æ C6 Ç C7
È C8 É C9 Ê CA Ë CB
Ì CC Í CD Î CE Ï CF
Ð D0 Ñ D1 Ò D2 Ó D3
Ô D4 Õ D5 Ö D6 × D7
Ø D8 Ù D9 Ú DA Û DB
Ü DC Ý DD Þ DE ß DF
à E0 á E1 â E2 ã E3
ä E4 å E5 æ E6 ç E7
è E8 é E9 ê EA ë EB
ì EC í ED î EE ï EF
ð F0 ñ F1 ò F2 ó F3
ô F4 õ F5 ö F6 ÷ F7
ø F8 ù F9 ú FA û FB
ü FC ý FD þ FE ÿ FF