Our current computing environment prohibits extensive experimental evaluations of the method. The predictor updates turn out to be quite time consuming, which makes dedicated neural network hardware advisable. The limited software simulations presented in this section, however, will show that the ``neural'' compression technique can achieve ``excellent'' compression ratios. Here the term ``excellent'' is defined by a statement from [1]:
"In general, good algorithms can be expected to achieve an average compression ratio of 1.5, while excellent algorithms based upon sophisticated processing techniques will achieve an average compression ratio exceeding 2.0."Here the average compression ratio is the average ratio between the lengths of original and compressed files.
The method was applied to German newspaper articles. The results were compared to those obtained with standard encoding techniques provided by the operating system UNIX, namely ``pack'', ``compress'', and ``gzip''. The corresponding decoding algorithms are ``unpack'', ``uncompress'', and ``gunzip'', respectively. ``pack'' is based on Huffman coding (e.g. [1]), while ``compress'' and ``gzip'' are based on techniques developed by Lempel and Ziv (e.g. [9]). As the file size goes to infinity, Lempel-Ziv coding becomes asymptotically optimal in a certain information-theoretic sense [8]. This does not necessarily mean, however, that Lempel-Ziv is optimal for finite file sizes.
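For illustration only, here is a minimal Python sketch of how such an average compression ratio can be measured, using Python's gzip module as a stand-in for the UNIX ``gzip'' tool; the article file names are hypothetical:

    import gzip

    def compression_ratio(data: bytes) -> float:
        """Length of the original divided by the length of the compressed data."""
        return len(data) / len(gzip.compress(data))

    # Hypothetical article files; the per-file ratios are averaged,
    # matching the definition of the average compression ratio above.
    files = ["article01.txt", "article02.txt", "article03.txt"]
    ratios = [compression_ratio(open(name, "rb").read()) for name in files]
    print("average compression ratio:", sum(ratios) / len(ratios))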
The training set for the predictor was given by a set of 40 articles from the newspaper Münchner Merkur, each containing between 10000 and 20000 characters. The alphabet consisted of a fixed set of possible characters, including upper case and lower case letters, digits, punctuation symbols, and special German letters like ``ö'', ``ü'', ``ä''. The predictor network had 430 hidden units. A ``true'' unit with constant activation 1.0 was connected to all hidden and output units. The learning rate was 0.2. The training phase consisted of 25 sweeps through the training set.
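A minimal sketch of one plausible realization of such a predictor follows, in Python with NumPy. The number of hidden units, the ``true'' unit, the learning rate, and the number of sweeps are taken from the text; the alphabet size and window length are assumptions (their exact values are not restated here), and the softmax/cross-entropy choice is ours:

    import numpy as np

    rng = np.random.default_rng(0)

    ALPHABET = 80    # assumption: the exact alphabet size is omitted above
    WINDOW = 5       # assumption: the time-window length is omitted as well
    N_HIDDEN = 430   # number of hidden units, as stated in the text
    LR = 0.2         # learning rate, as stated in the text
    SWEEPS = 25      # training sweeps through the training set

    n_in = WINDOW * ALPHABET  # one-hot encoded window of past characters
    W1 = rng.normal(0.0, 0.1, (N_HIDDEN, n_in + 1))      # +1 column: "true" unit
    W2 = rng.normal(0.0, 0.1, (ALPHABET, N_HIDDEN + 1))  # "true" unit again

    def forward(x):
        """Map a one-hot context window to a distribution over the next character."""
        xb = np.append(x, 1.0)                # constant activation 1.0 ("true" unit)
        h = 1.0 / (1.0 + np.exp(-(W1 @ xb)))  # sigmoid hidden layer
        hb = np.append(h, 1.0)                # "true" unit for the output layer
        z = W2 @ hb
        p = np.exp(z - z.max())
        return xb, hb, p / p.sum()            # softmax output

    def train_step(x, target):
        """One stochastic gradient step on the cross-entropy loss."""
        global W1, W2
        xb, hb, p = forward(x)
        dz = p.copy()
        dz[target] -= 1.0                     # output-layer error signal
        dh = (W2[:, :-1].T @ dz) * hb[:-1] * (1.0 - hb[:-1])
        W2 -= LR * np.outer(dz, hb)
        W1 -= LR * np.outer(dh, xb)

During training, train_step would be applied for 25 sweeps to all (window, next character) pairs extracted from the 40 training articles.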
The test set consisted of newspaper articles excluded from the training set, each containing between 10000 and 20000 characters. Table 1 lists the average compression ratios. The ``neural'' method outperformed the strongest conventional competitor, the UNIX ``gzip'' function based on a Lempel-Ziv algorithm.
How does a neural net trained on articles from Münchner Merkur perform on articles from other sources? Without retraining the neural predictor, we applied all competing methods to 10 articles from another German newspaper (the Frankenpost). The results are given in Table 2.
The Frankenpost articles were harder to compress for all algorithms, but the relative performance of the competing methods remained comparable.
Note that the time window used by the predictor was quite small. In general, larger time windows make more information available to the predictor, which in turn improves prediction quality and increases the compression ratio. We therefore expect even better results for larger time windows and for recurrent predictor networks.
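To make the role of the time window concrete, here is a small sketch (not from the paper) of how (context, next-character) training pairs depend on the window length n; the helper name is hypothetical:

    def windows(text, n):
        """Yield (context, next_char) pairs for a time window of n characters."""
        for i in range(n, len(text)):
            yield text[i - n:i], text[i]

    # A larger n exposes more context to the predictor:
    # next(windows("compression", 3)) -> ('com', 'p')
    # next(windows("compression", 5)) -> ('compr', 'e')

Each additional window position enlarges the input available to the predictor, at the price of a larger network and slower updates.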