image compression techniques seminar report
Post: #1

I want a complete seminar report on image compression.
my Email-id mukeshkumarroy432[at]
Post: #2
Image Compression Techniques

.docx  image compression report.docx (Size: 169.22 KB / Downloads: 70)


Compression means encoding data so that it occupies less storage than the original representation.
Why compression? It is worth revisiting the reasons why data compression is needed or useful.

One obvious reason is to save the cost of disk storage. This may seem strange given that disks are cheap; anyone can buy hundreds of gigabytes of disk for less than $100.00. So what is all the fuss about?

First, for the uninitiated: the disks used for high-end systems are not cheap.

Secondly, there is rarely a single copy of the production data. For example, if you use high-availability features like replication, log shipping, or mirroring, you have at least one more copy. What about the test environment? There is a copy there as well. All of these add to the cost of hardware.

Third, what about backups? If you keep many backups of your database over time, the cost multiplies. Granted, backups can be stored on less expensive media, but there is still a cost associated with them.

The second reason is the cost of managing the data. The larger the database, the longer backups and recovery take; the same goes for running DBCC commands, rebuilding indexes, and bulk import/export. If these operations are I/O-bound, then reducing the size of the data makes them run faster. Even better, they have less impact on the concurrent workload on your system. One interesting point is that if your database is compressed, its backups are automatically smaller.
Statistical encoding is another important approach to lossless data reduction. This term sounds very complex, but a similar trick in information coding had already been used by the famous American inventor Samuel Morse more than 150 years ago for his electromagnetic telegraph. A frequently occurring letter such as ‘e’ is transmitted as a single dot ‘ . ’, while an infrequent ‘x’ requires four Morse symbols ‘ - . . - ’. In this way the mean data rate required to transmit an English text is decreased as compared to a solution where each letter of the alphabet is coded with the same number of basic symbols.
Accordingly in image transmission, short code words or bit sequences (one to four bits) will be used for frequently occurring small gray level differences (0, +1, -1, +2, -2 etc.), while long code words are used for the large differences (for instance the 212 in our example) with their very infrequent occurrence.
Statistical encoding can be especially successful if the gray-level statistics of the image have already been changed by predictive coding. The overall result is redundancy reduction, that is, a reduction in the repetition of the same bit patterns in the data. When reading the reduced image data, these processes can be performed in reverse order without any error, and thus the original image is recovered. Lossless compression is therefore also called reversible compression.
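The dot-dash idea above is exactly what Huffman coding formalizes: build a prefix code in which frequent symbols get short code words. A minimal Python sketch (the sample string and symbol set here are illustrative, not gray-level differences from an actual image):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix code: frequent symbols get short code words."""
    freq = Counter(text)
    # Heap entries: (frequency, tiebreaker, {symbol: partial code word}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merge the two least frequent subtrees, prepending one bit to each.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("this is an example of a huffman tree")
# The frequent space symbol gets a code word no longer than the rare 'x'.
```

Decoding walks the bit stream and matches code words greedily; because the code is prefix-free, no code word is a prefix of another, so decoding is unambiguous.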

Different File Types

For the purposes of this study, we’re only going to focus on three file types, those most commonly found in web design: PNG, JPEG, and GIF. While there are other image formats out there that take advantage of compression (TIFF, PCX, TGA, etc.), you’re unlikely to run across them in any kind of digital design work.


GIF stands for Graphics Interchange Format, and is a bitmap image format introduced in 1987 by CompuServe. It supports up to 8 bits per pixel, meaning that an image can have a palette of up to 256 distinct colors chosen from the 24-bit RGB space. One of the biggest advantages of the GIF format is that it allows for animated images, something neither of the other formats mentioned here allows.


JPEG (Joint Photographic Experts Group) is an image format that uses lossy compression to create smaller file sizes. One of JPEG’s big advantages is that it allows the designer to fine-tune the amount of compression used. This results in better image quality when used correctly, while also producing the smallest reasonable file size. Because JPEG uses lossy compression, images saved in this format are prone to “artifacting,” where you can see pixelization and strange halos around certain sections of an image. These are most common in areas of an image where there is a sharp contrast between colors. Generally, the more contrast in an image, the higher the quality at which it needs to be saved to produce a decent-looking final image.

Choosing a File Format

Each of the file formats specified above is appropriate for different types of images. Choosing the proper format results in higher quality images and smaller file sizes. Choosing the wrong format means your images won’t be as high-quality as they could be and that their file sizes will likely be larger than necessary.
For simple graphics like logos or line drawings, GIF formats often work best. Because of GIF’s limited color palette, graphics with gradients or subtle color shifts often end up posterized. While this can be overcome to some extent by using dithering, it’s often better to use a different file format.
For photos or images with gradients where GIF is inappropriate, the JPEG format may be best suited. JPEG works great for photos with subtle shifts in color and without any sharp contrasts. In areas with a sharp contrast, it’s more likely there will be artifacts (a multi-colored halo around the area). Adjusting the compression level of your JPEGs before saving them can often result in a much higher quality image while maintaining a smaller file size.

Image Compression in Print Design

While the bulk of this article has focused on image compression in web design, it’s worth mentioning the effect compression can have in print design. For the most part, lossy image compression should be completely avoided in print design. Print graphics are much less forgiving of artifacting and low image quality than are on-screen graphics. Where a JPEG saved at medium quality might look just fine on your monitor, when printed out, even on an inkjet printer, the loss in quality is noticeable (as is the artifacting).
For print design, using file types with lossless compression is preferable.
TIFF (Tagged Image File Format) is often the preferred file format if compression is necessary, as it offers a number of lossless compression methods (including LZW). Then again, depending on the image and where it will be printed, it’s often better to use a file type with no compression (such as an original application file). Talk to your printer about which they prefer.


In order to determine what information in an audio signal is perceptually irrelevant, most lossy compression algorithms use transforms such as the modified discrete cosine transform (MDCT) to convert time domain sampled waveforms into a transform domain. Once transformed, typically into the frequency domain, component frequencies can be allocated bits according to how audible they are.
Audibility of spectral components is determined by first calculating a masking threshold, below which it is estimated that sounds will be beyond the limits of human perception.
The masking threshold is calculated using the absolute threshold of hearing and the principles of simultaneous masking—the phenomenon wherein a signal is masked by another signal separated by frequency, and, in some cases, temporal masking—where a signal is masked by another signal separated by time.
Equal-loudness contours may also be used to weight the perceptual importance of different components. Models of the human ear-brain combination incorporating such effects are often called psychoacoustic models.
Other types of lossy compressors, such as the linear predictive coding (LPC) used with speech, are source-based coders. These coders use a model of the sound's generator (such as the human vocal tract with LPC) to whiten the audio signal (i.e., flatten its spectrum) prior to quantization.
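The forward MDCT mentioned above can be sketched directly from its definition: 2N time samples map to N frequency coefficients. This is a naive O(N²) loop for illustration; real audio codecs use fast lapped-transform factorizations and overlap adjacent frames by N samples:

```python
import math

def mdct(frame):
    """Forward MDCT: map 2N time-domain samples to N frequency coefficients."""
    N = len(frame) // 2
    return [sum(frame[n] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for n in range(2 * N))
            for k in range(N)]
```

Feeding one of the cosine basis functions into `mdct` concentrates all the output in the matching coefficient, which is the property the bit-allocation step exploits: coefficients under the masking threshold can be coded coarsely or dropped.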
Post: #3

.ppt  IMAGE COMPRESSION.ppt (Size: 288 KB / Downloads: 38)


Image compression is the application of data compression to digital images. In effect, the objective is to reduce redundancy in the image data in order to store or transmit the data in an efficient form.
Image compression can be lossy or lossless.
Lossless compression is sometimes preferred for artificial images such as technical drawings, icons or comics.
Lossy methods are especially suitable for natural images such as photos in applications where minor (sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit rate.
Lossy compression methods, especially when used at low bit rates, introduce compression artifacts.
Methods for lossless image compression are: Run-length encoding, Entropy coding, Adaptive dictionary algorithms.
Methods for lossy compression: Chroma subsampling, Transform coding, Fractal compression.
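Run-length encoding, the first lossless method listed, is simple enough to sketch in a few lines: runs of identical pixel values collapse to (value, count) pairs, which pays off on images with large uniform areas.

```python
def rle_encode(pixels):
    """Run-length encoding: collapse runs of identical values into (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Exact inverse: expand each (value, count) pair back into a run."""
    return [v for v, n in runs for _ in range(n)]

row = [255, 255, 255, 0, 0, 255]
assert rle_decode(rle_encode(row)) == row   # lossless round trip
```

On noisy photographic data RLE can expand rather than compress (every pixel becomes a pair), which is why it suits artificial images and binary masks rather than photos.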


Redundancy reduction aims at removing duplication from the signal source (image/video).
Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the Human Visual System (HVS).
In general, three types of redundancy can be identified:
Spatial Redundancy: correlation between neighboring pixel values
Spectral Redundancy: correlation between different color planes or spectral bands
Temporal Redundancy: correlation between adjacent frames in a sequence of images (in video applications)


Lossless vs. Lossy compression: In lossless compression, the reconstructed image after compression is numerically identical to the original image; however, lossless compression achieves only a modest amount of compression. An image reconstructed following lossy compression contains degradation relative to the original, because the scheme discards information it deems redundant or imperceptible. In exchange, lossy schemes achieve much higher compression.
Predictive vs. Transform coding: In predictive coding, information already sent or available is used to predict future values, and the difference is coded. Since this is done in the image or spatial domain, it is relatively simple to implement and is readily adapted to local image characteristics. Transform coding first transforms the image from its spatial domain representation to a different type of representation using some well-known transform and then codes the transformed values (coefficients). Transform coding provides greater data compression compared to predictive methods, although at the expense of greater computation.


The DCT-based encoder compresses a stream of 8x8 blocks of image samples. Each 8x8 block passes through each processing step and yields its output in compressed form into the data stream. Because adjacent image pixels are highly correlated, the `forward' DCT (FDCT) processing step lays the foundation for achieving data compression by concentrating most of the signal in the lower spatial frequencies. For a typical 8x8 sample block from a typical source image, most of the spatial frequencies have zero or near-zero amplitude and need not be encoded.
The DCT itself introduces no loss to the source image samples; it merely transforms them to a domain in which they can be encoded more efficiently.
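The 2-D DCT-II of an 8x8 block can be written out directly from the textbook formula. This naive O(N⁴) version is for illustration only; real encoders use fast factorizations, and baseline JPEG also level-shifts samples by 128 before the transform, which is omitted here:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an 8x8 block of samples."""
    N = 8
    def alpha(u):
        return math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

flat = [[128] * 8 for _ in range(8)]
coeffs = dct2(flat)
# A perfectly flat block has all its energy in the DC coefficient (0, 0);
# every AC coefficient is zero, so only one value needs to be coded.
```

This makes the claim above concrete: for smooth blocks almost all coefficients are zero or near zero, and only the low-frequency corner of the matrix carries significant energy.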


Each of the 64 DCT coefficients is uniformly quantized in conjunction with a carefully designed 64-element Quantization Table QT. At the decoder, the quantized values are multiplied by the corresponding QT elements to recover the original unquantized values. After quantization, all of the quantized coefficients are ordered into the "zig-zag" sequence. This ordering helps to facilitate entropy encoding by placing low-frequency non-zero coefficients before high-frequency coefficients. The DC coefficient, which contains a significant fraction of the total image energy, is differentially encoded.
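Uniform quantization and the zig-zag scan are both mechanical enough to sketch. The quantization table values below are placeholders, not the JPEG default tables; the zig-zag walk itself follows the standard anti-diagonal order:

```python
def quantize(coeffs, qt):
    """Uniform quantization: divide each coefficient by its table entry and round."""
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qt)]

def zigzag_order(n=8):
    """JPEG zig-zag scan: walk the anti-diagonals (i + j constant),
    alternating direction, so low frequencies come first."""
    return [(i, j)
            for _, _, i, j in sorted((i + j, i if (i + j) % 2 else j, i, j)
                                     for i in range(n) for j in range(n))]

# The scan starts at the DC coefficient and snakes toward high frequencies.
assert zigzag_order()[:6] == [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```

Because quantization rounds high-frequency coefficients (divided by large table entries) to zero, the zig-zag order tends to end in a long run of zeros, which is exactly what the entropy-coding stage exploits.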


The list of values produced by scanning is entropy coded using a variable-length code (VLC).
Each VLC code word denotes a run of zeros followed by a non-zero coefficient of a particular level. VLC coding exploits the fact that short runs of zeros are more likely than long ones and that small coefficients are more likely than large ones: the VLC allocates code words whose lengths depend on the probability with which they are expected to occur.
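Extracting the (zero-run, level) pairs that the VLC then codes is a one-pass scan over the zig-zag-ordered list. The coefficient values below are made up for illustration, and real JPEG adds details omitted here (an end-of-block symbol, run-length escapes, separate DC handling):

```python
def run_level_pairs(scanned):
    """Convert a zig-zag-scanned coefficient list into (zero-run, level) pairs."""
    pairs, run = [], 0
    for c in scanned:
        if c == 0:
            run += 1               # count zeros preceding the next non-zero level
        else:
            pairs.append((run, c))
            run = 0
    return pairs

scanned = [35, 0, 0, -2, 1, 0, 0, 0, 4]
assert run_level_pairs(scanned) == [(0, 35), (2, -2), (0, 1), (3, 4)]
```

Each pair is then mapped to a single variable-length code word, so a long tail of zeros costs almost nothing to transmit.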

