Information Hiding in JPEG Images

Since the establishment of the Internet, information security has always been a primary concern in digital transmission. In fact, every Internet service assures its clients of privacy, security and confidentiality. However, alongside the advancement of approaches addressing these issues has come the development of programming malpractices. Internet transactions are therefore never truly secure and always carry some risk, especially those involving highly sensitive data, as in telecommunications, business and banking. Consequently, various studies have been undertaken in the hope of resolving these issues. Data encryption and data hiding are just two of the many methods addressing information security.

Cryptography, a sub-discipline of data encryption, is the art and science of writing in secret code. Unfortunately, encrypting the contents of a secret message is sometimes not enough; it may also be necessary to conceal its very existence. The technique used to implement this essential secrecy is information hiding. Steganography, digital watermarking and fingerprinting all deal with information hiding; essentially, these methods differ in the purpose of the hidden data [1]. This study, however, focuses only on steganography.

Steganography, which means 'covered or hidden writing', studies the encoding and detection of hidden information in digital communication transmissions. Steganographic methods hide the existence of an arbitrary digital message by encoding it into suitable carriers such as images, audio files, text files, videos and signals, thus making it difficult for a potential investigator to detect [2]. Among the various carrier options, the scope of this study is narrowed to image steganography, particularly in JPEG images.

JPEG images are the most common subjects of image steganography, not only because JPEG is the most common image file format on the Internet, but also because modifications of the standard JPEG quantization tables can be optimized to increase hiding capacity. This concept is the foundation of the works of Chang et al. [3] and Wang et al. [4], which in turn are the basis of the study of Caandi et al. [5], from which this study is adapted. Furthermore, to perform Optimized Least Significant Bit substitution (OLSB), the approach to image steganography used in this study, the Harmony Search Algorithm (HSA) is utilized to find the optimal substitution matrix for the encoding process.

Harmony Search is a music-inspired metaheuristic optimization algorithm introduced by Geem et al. in 2001. Since then, it has proven useful in many optimization problems, including function optimization, engineering optimization, design of water distribution networks, groundwater modeling, energy-saving dispatch, truss design, vehicle routing, and others [6]. This study therefore intends to test its efficiency in image steganography using the concept of Optimal Least Significant Bit substitution.

Image

An image is a two-dimensional picture that resembles the appearance of a subject: it illustrates the likeness of something seen, produced or perceived [10]. It is composed of an array of numbers, ranging from 0 to 255, that represent light intensities at various points. These points, referred to as picture elements or pixels, make up the raster data of the image. Each pixel typically uses 3 bytes to represent the primary colors red, green and blue (RGB), the basic color format used in images. These pixels thus determine the size of the image file. Moreover, since this study focuses on digital images, compressing the image is beneficial, as file size is a major factor in transmitting files [11].

Generally speaking, digital images come in two types: raster (also known as bitmap) and vector images. Raster images are composed of a matrix (grid) or bitmap of digital picture elements (pixels). The main limitation when working with bitmaps is the resolution of the viewing device. Because information is filled in at the pixel level, when upscaling or downscaling a bitmap the computer has to interpolate (enlarging) or discard information (reducing), inevitably damaging the final result. Vectors, on the other hand, take the opposite approach. The image is not a product of "filling" pixels with information; it is instead the software's interpretation of a set of mathematical calculations. As it is resolution independent, pixels are unharmed: rescaling a vector merely requires changing the parameters of the mathematical locations, and the software redistributes the image at whatever size is indicated. Hence, no resampling takes place and the quality is always the same; no information is interpolated or lost in the process [14, 15].

Bitmap image enlarged to show the individual pixels.

Although it is not inherently supported, bitmaps can carry other information such as alpha channels, Z-depth and IDs, giving the images special properties such as transparency and 3D filtering. Vector images, in contrast, are not confined to the square space of their canvas, as they have innate transparency properties.

Vectors have built-in transparency properties; bitmaps need extra information.

Vector images are less popular, since they are less photorealistic than raster images. Conversely, bitmaps used with technical carelessness can wreck printing workflows. The perfect example of a vector is a font: a 6 px font has the same sharpness and quality as a 600 px font. The perfect example of a bitmap, on the other hand, is a high color-depth photograph.

Figure 1: Beautiful shot by Gregory Hugh Davidson

There is not a single command in the code of a vector image that can tell the software to precisely, positively fill the displayed pixels on the monitor. The power of vector graphics is that resizing needs no resampling, as it merely renders an equation over a given grid of pixels. Surprisingly, this versatility is also what makes vectors fail when scaled down to extremely small sizes such as 24 or 16 pixels, as in icons. The same rule applies to fonts: vector fonts are indispensable in 99% of graphic work, but for fonts of 9 px and below, bitmapped fonts are a better option.

Two-dimensional images are represented as ones and zeros, the fundamental language of a computer called binary. In this study, this representation of an image is an important factor in embedding, while other characteristics of a vector or bitmap image, such as size, color, pixels and resolution, contribute to the quality of the resulting image after embedding.

Image Analysis

The concept of image analysis is widely used in machine or computer vision. It is employed to derive meaningful information, such as contour lengths, areas, shape and size distribution, by analyzing the data that comprise an image, chiefly digital images, by means of digital image processing techniques [16]. It can be as simple as bar code reading or as sophisticated as face recognition. Analyzing images taken from medical apparatus is likewise a significant function of image analysis tools.

Image Compression

Image compression is the minimization of the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space, and reduces the time required for images to be sent over the Internet or downloaded from Web pages [17]. Compressing an image is significantly different from compressing raw binary data. General-purpose compression programs can be used to compress images, but the result is less than optimal, because images have statistical properties that can be exploited by encoders specifically designed for them. Moreover, some of the finer details in the image can be sacrificed for the sake of saving a little more bandwidth or storage space, which means lossy compression techniques can be used in this area [18].

Lossless and Lossy Image Compression

Lossless and lossy compression are two image compression types that describe whether or not all original data can be recovered when a compressed file is uncompressed. With lossless compression, every single bit of data originally in the file remains after the file is uncompressed; all of the information is completely restored. This is generally the technique of choice for text or spreadsheet files, where losing words or financial data would pose a problem. The Graphics Interchange Format (GIF) is an image format used on the Web that provides lossless compression.

Figure 2: Illustration of the effect of compression on file size.

Figure 3: Illustration of JPEG compression. On the left, the original image (270 kB); on the right, the compressed file (220 kB). [5]

Lossy compression, on the other hand, reduces a file by permanently eliminating certain information, especially redundant information. When the file is uncompressed, only a part of the original information remains (although the user may not notice it). Lossy compression is generally used for video and sound, where a certain amount of information loss will not be detected by most users. The JPEG image file, commonly used for photographs and other complex still images on the Web, is an image format with lossy compression. Using JPEG compression, the creator can decide how much loss to introduce, trading off file size against image quality [19].

Image Processing

Image processing involves signal processing with a two-dimensional image as the input of a system. The output may be either a modified image or a set of characteristics or parameters related to the image. It involves many transformations, such as enlargement, size reduction, linear translation and rotation. The approach treats the image as a two-dimensional signal and applies standard signal-processing techniques to it. Standard techniques involve simply estimating missing pixels based on the color of the nearest known pixels; more sophisticated techniques may use algorithms that judge a missing pixel by factoring in the relative colors of all surrounding pixels. Techniques to align images are also quite straightforward [20].

Image processing contributes greatly to the success of computer vision, feature detection and augmented reality. Perhaps the application most familiar to most of us is in security and surveillance. It has also contributed much to the field of medicine; medical image processing and medical image analysis are just a few examples. Furthermore, we possess a wealth of data on the surfaces of planets but need image processing to highlight areas of interest for further study; hence a third significant area of image processing deals with images obtained through remote sensing, such as from satellites, through which scientists extend their capability to judge the presence of craters, soil and atmospheric characteristics [20]. With modern digital technology, the demand for image processing in diverse application areas continues to grow, as in multimedia computing, secured image data communication, biomedical imaging, biometrics, remote sensing, compression and the like.

Color Space

A color space is an abstract model used in specifying, creating and visualizing color. It has three main attributes: saturation, brightness and hue. These attributes indicate a position within the color space, which in turn identifies a color, often as a conversion or combination of colors. Six different color spaces are currently used in image and video processing: RGB (Red Green Blue), CMY(K) (Cyan Magenta Yellow (Black)), HSL (Hue Saturation Lightness), YUV, YCbCr and YPbPr. Among these, RGB, YCbCr and YUV have been useful in compression applications [5, 21].

For every application there is a color space that suits it best; for example, some equipment has limiting factors that dictate the size and type of color space that can be used. Some color spaces are device dependent, while others are equally valid on whatever device they are used. Furthermore, most computer graphics color spaces are perceptually non-linear, which makes them inefficient for coding color information: some areas are represented at too high a precision, while others at not enough. Another problem is that many are unintuitive in use, so that specifying a desired color can be difficult for the inexperienced eye (such as identifying brown in an RGB vector) [21].

RGB

RGB is frequently used in most computer applications, since no transform is required to display information on the screen; for this reason it is commonly the base color space for most applications. It is the color space produced on CRT displays and similar monitors when pixel values are applied to a graphics card. RGB space may be visualized as a cube with the three axes corresponding to red, green and blue. The bottom corner, where red = green = blue = 0, is black, while the opposite top corner, where red = green = blue = 255 (for an 8-bit-per-channel display system), is white [21].

Figure 4: RGB Color Cube [25]

Luminance and Chrominance

Luminance- and chrominance-based color spaces correspond to brightness and color. Luminance is the perceived brightness, or grayscale level, of a color, produced from the density of the RGB colors in a projected image. It is adjustable to produce not only colors but also blacks and whites of varying intensity. Chrominance, on the other hand, defines the color component of an image pixel, setting its hue and saturation [5, 25]. These color spaces are denoted YUV, YIQ, YCbCr and YCC.

The YUV space is a European standard, while the YIQ color space is North American; these two are the analog spaces for the PAL and NTSC systems respectively, while YCbCr is a digital standard. These methods are almost identical, using slightly different RGB color space conversion equations. In all of them, Y is the luminance or brightness component, and U and V (or I and Q, Cb and Cr) are the chrominance, or color, components. The chrominance values are bipolar, going negative as well as positive. These are the variables changed by the brightness, color and tint controls of a display [21, 22].

The advantage of using these color spaces is that the amount of information needed to define a color image is greatly reduced, based on human visual perception. However, this compression restricts the color range in these images; many colors that can appear on a computer display cannot be recreated on a television screen. This project uses YCbCr, which is simply a scaled, gamma-adjusted version of the YUV color space, in the JPEG processing [5, 22, 26].

YCbCr

Human eye sensitivity to color differences and brightness is the basis of the YCbCr color space. It deals only with the digital representation of RGB signals in YCbCr form [21]. The following are the coding conversion equations for non-linear signals used in the JPEG processing:

Y′ = 16 + (0.299)R′ + (0.587)G′ + (0.114)B′

Cb′ = 128 − (0.16874)R′ − (0.33126)G′ + (0.5)B′

Cr′ = 128 + (0.5)R′ − (0.41869)G′ − (0.08131)B′
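As a concrete illustration, these equations translate directly into code. The following is a minimal sketch (assuming an H×W×3 NumPy array of 8-bit RGB values; the function name is illustrative):

    import numpy as np

    def rgb_to_ycbcr(rgb):
        """Apply the three conversion equations above to an HxWx3 uint8 array."""
        rgb = rgb.astype(np.float64)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  =  16 + 0.299   * r + 0.587   * g + 0.114   * b
        cb = 128 - 0.16874 * r - 0.33126 * g + 0.5     * b
        cr = 128 + 0.5     * r - 0.41869 * g - 0.08131 * b
        ycbcr = np.stack([y, cb, cr], axis=-1)
        return np.clip(ycbcr.round(), 0, 255).astype(np.uint8)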

Joint Photographic Experts Group

JPEG is a well-known standard method of compressing photographic images. It was specified by a committee called the Joint Photographic Experts Group, formed in 1986, and has been an international standard since 1992 [5].

The JPEG format is one of the two most common formats used on the Web. It is capable of storing 24 bits per pixel and is designed for compressing both color and gray-scale images of real-world scenes. It is used in many applications, such as satellite, medical and general photography [5].

JPEG Process

A JPEG process is divided into two parts: the encoding method and the decoding method. The encoding method comprises multi-level coding processes: color space transformation, block splitting, discrete cosine transformation, quantization and entropy coding. The decoding method is the reverse of the encoding process. JPEG coding is repeatable, since an image can be compressed and decompressed several times [5].

JPEG Encoding Algorithm [26]

JPEG Encoding

The JPEG encoding algorithm performs compression in five phases:

1. The image is converted from RGB to YCbCr (the luminance and chrominance color spaces).

2. The image is partitioned into separate blocks of 8×8 pixels.

3. The Discrete Cosine Transform (DCT) is applied to each block.

4. The DCT coefficients are quantized.

5. Entropy coding is applied.

Color Space Transformation

Each component of a color image is compressed independently, as with the lossless compression scheme. RGB color space is not the most efficient basis for JPEG compression, as it is particularly susceptible to color changes due to quantization [25].

Color images are converted from RGB components into the YCbCr color space, which consists of the luminance (grayscale) component and two chrominance (color) components. Note that although some literature (including some previously produced by the HyperMedia Unit) states that images are converted into YUV components, or is simply very hazy about what format the image is converted to, it is not exactly YUV [25].

Block Splitting

Each of the three YCbCr color planes is encoded separately but using the same scheme. Eight-by-eight pixel blocks (64 pixels in total) were chosen as the size for computational simplicity. If a dimension of the image is not a multiple of 8, the image is padded out to the required size, with the extra space added at the right and bottom; when the image is decoded, these pad pixels are chopped off [25].
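A minimal sketch of the padding and partitioning described above (one channel plane as a 2-D NumPy array; edge replication is one common padding choice, and the helper name is illustrative):

    import numpy as np

    def split_into_blocks(plane):
        """Pad a 2-D plane to multiples of 8, then return an array of 8x8 blocks."""
        h, w = plane.shape
        padded = np.pad(plane, ((0, (-h) % 8), (0, (-w) % 8)), mode="edge")
        ph, pw = padded.shape
        blocks = padded.reshape(ph // 8, 8, pw // 8, 8).swapaxes(1, 2)
        return blocks.reshape(-1, 8, 8)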

Discrete Cosine Transformation

The DCT separates the frequencies in an image by converting the spatial image representation into a frequency map of DCT coefficients. The lowest-order or DC coefficient, found at the top-left corner of each block, represents the average value of the block. The rest of the coefficients are the higher-order or AC coefficients, which represent the strength of increasingly rapid changes across the width or height of the block. The highest AC coefficient represents the strength of a cosine wave alternating from maximum to minimum at adjacent pixels [27]. The following equation is the idealized mathematical definition of the 8×8 DCT [29]:

F(u, v) = (1/4) C(u) C(v) Σ(x=0..7) Σ(y=0..7) f(x, y) cos[(2x + 1)uπ/16] cos[(2y + 1)vπ/16],

where C(u) = 1/√2 for u = 0 and C(u) = 1 otherwise (and likewise for C(v)).

Since the DCT works on a particular matrix format, the values are first level-shifted from unsigned integers (0 to 255) to signed integers (−128 to 127), which is done by simply subtracting 128 from every matrix entry [28].

Figure 5: Example of matrix shifting [28]

From this point on, each color component is processed independently, so a "pixel" means a single value, even in a color image [27, 28]. To obtain the matrix form of the transform, the following equation is used [29]:

T(i, j) = 1/√8 for i = 0, and T(i, j) = (1/2) cos[(2j + 1)iπ/16] for i > 0.

Finally, to perform the DCT proper, matrix multiplication following this equation is used:

D = T M Tᵀ,

where M is the level-shifted block and Tᵀ is the transpose of T. The example in Figure 5 then results in the coefficient matrix shown in Figure 6:

Figure 6: DCT coefficients [28]

The DCT calculation is fairly complex; in fact, it is the most costly step in JPEG compression. The point of doing it is to make it easier to discard high-frequency data without losing low-frequency information. Since this step merely replaces the 64 pixel values of a block with 64 DCT coefficients, the DCT itself is lossless except for round-off errors [27].
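The matrix-form DCT above can be sketched as follows (pure NumPy; a production encoder would use a fast transform such as scipy.fftpack.dct instead):

    import numpy as np

    # The 8x8 DCT basis matrix T defined above.
    T = np.zeros((8, 8))
    T[0, :] = 1.0 / np.sqrt(8)
    for i in range(1, 8):
        for j in range(8):
            T[i, j] = 0.5 * np.cos((2 * j + 1) * i * np.pi / 16)

    def dct_block(block):
        """Level-shift an 8x8 uint8 block by 128 and compute D = T M T^T."""
        m = block.astype(np.float64) - 128.0
        return T @ m @ T.T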

Quantization

The purpose of quantization is to discard information which is not visually significant. This step takes advantage of the human eye's low sensitivity to high-frequency brightness variation to greatly reduce the amount of information in the high-frequency components. This is done by simply dividing each component in the frequency domain by a constant for that component and then rounding to the nearest integer. It is the main lossy operation in the whole process [30].

To discard an appropriate amount of information, each of the 64 DCT coefficients is uniformly quantized in conjunction with a 64-element quantization table, specified by the application, and the result is rounded to an integer. The larger the quantization coefficient, the more data is lost, because the actual DCT value is represented less and less accurately. Each of the 64 positions of the DCT output block has its own quantization coefficient, with the higher-order terms being quantized more heavily than the low-order terms (that is, the higher-order terms have larger quantization coefficients). Furthermore, separate quantization tables are employed for luminance and chrominance data, with the chrominance data being quantized more heavily than the luminance data. This allows JPEG to exploit further the eye's differing sensitivity to luminance and chrominance [27, 29].
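In code, quantization reduces to an element-wise divide-and-round; a minimal sketch (quant_table would be an 8×8 array such as the modified tables given below):

    import numpy as np

    def quantize(dct_coeffs, quant_table):
        """Divide each DCT coefficient by its table entry and round to an integer."""
        return np.round(dct_coeffs / quant_table).astype(np.int32)

    def dequantize(q_coeffs, quant_table):
        """Decoder side: multiply back; the rounding loss is not recoverable."""
        return (q_coeffs * quant_table).astype(np.float64)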

It is this step that is controlled by the "quality" setting of most JPEG compressors. The compressor starts from a built-in table appropriate for a medium-quality setting and increases or decreases the value of each table entry in inverse proportion to the requested quality. The complete quantization tables actually used are recorded in the compressed file, so that the decompressor knows how to (approximately) reconstruct the DCT coefficients [27].

However, this project uses modified quantization tables as proposed by Chang et al. [3] and Li and Wang [4].

Proposed Modified and Optimized Luminance JPEG Quantization Table

 8   1   1   1   1   1   1   1
 1   1   1   1   1   1   1  28
 1   1   1   1   1   1  35  28
 1   1   1   1   1  44  40  31
 1   1   1   1  34  55  52  39
 1   1   1  32  41  52  57  46
 1   1  39  44  52  61  60  51
 1  46  48  49  56  50  52  50

Proposed Modified and Optimized Chrominance JPEG Quantization Table

 9   1   1   1   1   1   1   1
 1   1   1   1   1   1   1  50
 1   1   1   1   1   1  50  50
 1   1   1   1   1  50  50  50
 1   1   1   1  50  50  50  50
 1   1   1  50  50  50  50  50
 1   1  50  50  50  50  50  50
 1  50  50  50  50  50  50  50

The choice of an appropriate quantization table is something of a black art. Most existing compressors start from a sample table developed by the ISO JPEG committee. It is likely that future research will yield better tables that provide more compression for the same perceived image quality. Deployment of improved tables should not cause any compatibility problems, because decompressors merely read the tables from the compressed file; they do not care how the table was picked [27].

Entropy Coding

The final processing step of the encoder is entropy coding, a special form of lossless data compression. It involves arranging the image components in a "zigzag" order, applying a run-length encoding (RLE) algorithm that groups similar frequencies together and run-length codes the zeros, and then using Huffman coding on what is left [30].

Before entropy coding, there are a few processing steps for the quantized coefficients. The coefficients that result from quantization contain a significant amount of redundant data, which Huffman compression losslessly removes, resulting in smaller JPEG data. An optional extension to the JPEG specification allows arithmetic coding to be used instead of Huffman coding for an even greater compression ratio. At this point, the JPEG data stream is ready to be transmitted across a communications channel or encapsulated inside an image file format [27].

Note that the DC coefficient is treated separately from the 63 AC coefficients. The DC coefficient is a measure of the average value of the 64 image samples. Because there is usually strong correlation between the DC coefficients of adjacent 8×8 blocks, the quantized DC coefficient is encoded as the difference from the DC term of the previous block in the encoding order, a scheme called Differential Pulse Code Modulation (DPCM), defined as follows [29]:

DiffDC(i) = DC(i) − DC(i − 1)

where i denotes the i-th block and DC(0) = 0.

DPCM can usually achieve further compression due to the smaller range of the coefficient values [29].

The remaining AC coefficients are ordered into the "zigzag" sequence, which helps to facilitate entropy coding by placing low-frequency coefficients before high-frequency coefficients [29].

The outputs of DPCM and zigzag scanning can then be entropy coded separately. Entropy coding encodes the quantized DCT coefficients more compactly based on their statistical characteristics. The JPEG proposal specifies two entropy coding methods: Huffman coding and arithmetic coding.

Entropy coding can be considered a two-step process. The first step converts the zigzag sequence of quantized coefficients into an intermediate sequence of symbols. The second step converts the symbols into a data stream in which the symbols no longer have externally identifiable boundaries. The form and definition of the intermediate symbols depend on both the DCT-based mode of operation and the entropy coding method.
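A minimal sketch of this preparation stage (zigzag scan of one 8×8 block plus DPCM of the DC terms across blocks; the Huffman step itself is omitted):

    import numpy as np

    # Zigzag index pairs for an 8x8 block: diagonals of increasing frequency,
    # traversed in alternating directions.
    ZIGZAG = sorted(((r, c) for r in range(8) for c in range(8)),
                    key=lambda rc: (rc[0] + rc[1],
                                    rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

    def zigzag(block):
        """Flatten an 8x8 quantized block into the 64-element zigzag sequence."""
        return np.array([block[r, c] for r, c in ZIGZAG])

    def dpcm_dc(blocks):
        """DiffDC(i) = DC(i) - DC(i-1), with the predecessor of block 0 taken as 0."""
        dc = np.array([b[0, 0] for b in blocks])
        return np.diff(dc, prepend=0)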


Steganography

The rise of the Internet has resulted in many systems for securing information. Cryptography hides the content of a message but makes no attempt to conceal the existence of the message. To secure a message, it may also be necessary to hide its existence, since this prevents a potential investigator from even suspecting that the message exists. The technique used to implement this essential secrecy is steganography. The word is derived from Greek and literally means "covered writing". The process of a typical steganography technique basically includes the following steps [13]:

1. Providing a file (such as a gray-scale image), called the cover media, in which the secret data is to be embedded.

2. Using a stego-key in order to produce the output file, called the stego-media. It is worth mentioning that without the stego-key, extraction of the secret message bits is impossible.

3. Embedding the secret message file, which can be any file (e.g., a text file), inside the cover media.

Optimal Least Significant Bit Substitution

The Optimal Least Significant Bit substitution scheme improves stego-image quality by finding an optimal pixel value through an adjustment process. Three candidates are picked for the pixel's value and compared to see which one is closest to the original pixel value while still carrying the embedded secret data. The best candidate is then called the optimal pixel and is used to conceal the secret data [13].

The embedding algorithm is as follows (a code sketch follows the steps):

Let p be the original pixel value, into which k bit(s) of secret data are to be embedded.

Embed the k bit(s) of secret data into p using the LSB substitution method; this yields the stego pixel value p′.

Generate another two pixel values, p″ and p‴, by adjusting the (k + 1)th bit of p′. Thus p″ and p‴ can be calculated as follows:

p″ = p′ + 2^k,  p‴ = p′ − 2^k.

Obviously, the hidden data in p″ and p‴ is identical to that in p′, because the last k bits of all three values are the same.

The best approximation to the original pixel value p (i.e., the optimal candidate) is found by the following expression:

p_opt = argmin |g − p| over g ∈ {p′, p″, p‴}.

Finally, the optimal candidates replace the original pixel values, and the embedding algorithm comes to its end.
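The candidate search sketched below follows the steps above for a single 8-bit pixel (function name illustrative):

    def olsb_embed_pixel(p, secret, k):
        """Embed k secret bits in p's LSBs, then pick the candidate nearest to p."""
        base = (p & ~((1 << k) - 1)) | secret                  # p': plain LSB substitution
        candidates = [base, base + (1 << k), base - (1 << k)]  # p', p'', p'''
        in_range = [c for c in candidates if 0 <= c <= 255]
        return min(in_range, key=lambda c: abs(c - p))         # the optimal candidate

    # Example: p = 130 (0b10000010), secret = 0b11, k = 2  ->  131 (0b10000011)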

Harmony Search Algorithm

Computer scientists have found a connection between playing music and finding optimal solutions. This relationship led to the creation of a new algorithm called Harmony Search. The Harmony Search Algorithm (HSA) was first developed by Geem et al. in 2001. Since then, it has been applied to many optimization problems, and its effectiveness and advantages have been demonstrated in various applications, including the design of water distribution networks, groundwater modeling, energy-saving dispatch, truss design, vehicle routing, function optimization, engineering optimization and others [6].

The HSA is a music-based metaheuristic optimization algorithm inspired by the observation that the aim of music is to search for a perfect state of harmony. The effort to find that harmony in music is analogous to the search for optimality in an optimization process. By analogy with an orchestra or a band of jazz musicians, each musician, assigned his own instrument, plays a note contributing to the overall quality of the harmony produced [6].

HSA: Optimal Solution Finding

In searching for the perfect harmony, a musician employs three methods: (1) playing from memory, (2) playing modified music from memory and (3) creating music from random notes. Geem et al. formalized these three methods in the newly developed optimization algorithm. The three corresponding components of HSA are (1) memory consideration, (2) pitch adjustment and (3) randomization. Each of these elements plays a critical role in HSA's search for the optimal solution [6, 12].

For a musician to create good music, he may consider existing compositions. This consideration of existing compositions corresponds in HSA to harmony memory consideration. The harmony memory ensures that promising solutions are stored as elements of a solution vector.

Another way a musician can play good music is by playing a modified existing composition. HSA also uses this concept, as pitch adjustment, also referred to as the exploitation mechanism of HSA, which is responsible for generating solutions that are slightly varied from the existing ones.

Randomization comes into play as the third method, also referred to as the exploration mechanism of HSA. It ensures that the search for the solution is not confined to local optima; in effect, the solution set is more diverse.

In the program flow, after initialization, the optimization process starts and runs until the termination condition is reached. Each decision variable in HSA contributes to the optimization. The value of each decision variable is decided with respect to the harmony memory considering rate (r_hmcr) on each pass; r_hmcr decides whether the value of the i-th variable will be taken from the values in the harmony memory [6, 12].

HSA: Pseudo Code

Harmony Search Algorithm

Begin
  Define the objective function f(x), x = (x1, x2, …, xd)^T
  Define the harmony memory accepting rate (r_accept)
  Define the pitch adjusting rate (r_pa) and other parameters
  Generate the harmony memory with random harmonies
  While (t < max number of iterations)
    While (i <= number of variables)
      If (rand < r_accept) Choose a value from the harmony memory for variable i
        If (rand < r_pa) Adjust the value by adding a certain amount
        End if
      Else Choose a random value
      End if
    End while
    Accept the new harmony if better
  End while
  Find the current best solution
End
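The pseudocode above maps to a compact routine; the sketch below is a generic continuous-variable minimizer under assumed parameter names (bw is a pitch-adjustment bandwidth not specified in the pseudocode):

    import random

    def harmony_search(f, dim, lo, hi, hms=20, r_accept=0.9,
                       r_pa=0.3, bw=0.05, max_iter=5000):
        """Minimize f over [lo, hi]^dim with basic Harmony Search."""
        hm = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
        scores = [f(x) for x in hm]
        for _ in range(max_iter):
            new = []
            for i in range(dim):
                if random.random() < r_accept:        # memory consideration
                    v = random.choice(hm)[i]
                    if random.random() < r_pa:        # pitch adjustment
                        v += bw * random.uniform(-1, 1)
                else:                                 # randomization
                    v = random.uniform(lo, hi)
                new.append(min(max(v, lo), hi))
            worst = max(range(hms), key=scores.__getitem__)
            score = f(new)
            if score < scores[worst]:                 # accept new harmony if better
                hm[worst], scores[worst] = new, score
        best = min(range(hms), key=scores.__getitem__)
        return hm[best], scores[best]

For example, harmony_search(lambda x: sum(v * v for v in x), dim=2, lo=-5.0, hi=5.0) converges toward the zero vector.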

HSA: Flow Chart

Chapter 2

Literature Review

Information hiding has been the focus of much research, and the techniques employed have become more complex. Of all the techniques applied to steganography, spread spectrum methods satisfy the most requirements and are especially robust against statistical attacks. Least significant bit substitution methods, on the other hand, are at the bottom of the hierarchy; nevertheless they have their strengths and are therefore the subject of research to improve imperceptibility [1].

Recently, several steganographic techniques for data hiding in JPEGs have been developed: JSteg, JP Hide & Seek, F5 and OutGuess. All of these techniques manipulate the quantized DCT coefficients to embed the hidden message.

Li and Wang [7] contributed a new steganographic method based on JPEG and the Particle Swarm Optimization (PSO) algorithm. They utilized PSO to select the optimal substitution matrix. Furthermore, they used the JPEG and Quantization Table Modification (JQTM) method to modify the JPEG standard quantization table, which yielded an increased hiding capacity in the host image. It is duly noted that Optimal Least Significant Bit substitution (OLSB) improves stego-image quality by embedding the secret image in the least significant bits of the pixels of the host image. Overall, the significance of their study is its treatment of the tradeoff between payload and image quality. However, their proposed method still exhibited some limitations: a longer computational time requirement and a larger JPEG file size.

Wang et al. [8] proposed an OLSB and Genetic Algorithm (GA) approach to image steganography. They employed a Peak Signal-to-Noise Ratio (PSNR) calculation for each substitution matrix used in the GA process. They also computed the Mean Square Error (MSE), from which they proved that the worst MSE of optimal substitution is exactly half of the worst MSE of simple substitution, confirming the effectiveness of OLSB substitution under worst-case conditions. Furthermore, they utilized GA to solve the problem of hiding data in the rightmost k LSBs of the host image, since this technique may require a long computational time to find the optimal result when k is large: the number of possible solutions grows exponentially as k increases. In cases where k ≤ 3, however, embedding in the k LSBs of the host image works just fine. They also developed an improved hiding technique based on conceptual modeling and obtained a high-quality embedding result. Specifically, they used the rightmost four and two LSBs of noisy pixels and smooth pixels, respectively, to hide the important data. Their experimental results reveal the practicality and quality of their proposed method.

Bhattacharya et al. [9] used a session-based stego-key, a Genetic Algorithm (GA) and a variable-bit replacement technique to improve image hiding. In their approach, the secret image is perturbed in two stages. It starts by entering the 8-bit stego-key (SK). The key K is the decimal equivalent of the first 3 moderately significant bits (MdSB) of SK, and ranges from 0 to 7. The encrypted Kth bit of each byte of the perturbed image is [the Kth bit of each byte of the hidden image] XOR [the Kth bit of SK]. Then, as the last step of the first-stage perturbation, the lower and upper 4 bits of each byte are swapped to obtain the perturbed image. The second stage involves the use of GA to generate a transposition operator P, with which the encrypted output file is perturbed once again.

The Harmony Search Algorithm (HSA) is a new metaheuristic algorithm developed by Geem et al. in 2001 [6, 12]. Since then, it has been applied to many optimization problems, and its effectiveness and advantages have been demonstrated in various applications. Harmony Search has outperformed previously existing optimization algorithms, including ACO, PSO and GA, on benchmark problems such as the Traveling Salesman Problem. In real-world problems such as groundwater modeling, HSA has likewise outperformed other optimization algorithms. Also notable is that, in terms of cost, HSA was able to find a more optimal solution than GA. With its potential and all that HSA has yet to offer, it is interesting to use this metaheuristic as an underlying algorithm in other fields, especially steganography.

Geem and Williams [7] applied Harmony Search to the Maximal Covering Species Problem (MCSP), an ecological conservation problem which aims to preserve species and their habitat by finding the maximum number of species while limiting the number of land parcels. The algorithm was tested on the so-called Oregon data, which consist of 426 species and 441 parcels. They modified the structure of HSA, since their decision variables admit only two possible values; consequently, they omitted the pitch adjustment operation so that HSA could adapt to the problem. The results confirmed the viability of HS for ecological optimization, and it outperformed Simulated Annealing in solving the problem.

Chapter 3

Statement of the Problem

Along with the advancement of approaches addressing information security has come the development of programming malpractices. Internet transactions are therefore never truly secure and always carry some risk, especially those involving highly sensitive data, as in telecommunications, business and banking.

Data hiding is one method of addressing information security. It is concerned with concealing information in digital media, thus making it difficult for a potential investigator to detect. One sub-discipline of data hiding is steganography, which studies the encoding and detection of hidden information in digital communication transmissions.

A good steganographic algorithm has to be imperceptible. Steganographic imperceptibility requires invisibility, sufficient payload capacity, robustness against statistical attacks and image manipulation, independence of file format, and carrier files of an unsuspicious nature.

Using the concept of Optimal Least Significant Bit substitution, this study will test whether the Harmony Search Algorithm can be used to produce and support efficient image steganography.

Chapter 4

Proposed Methodology

4.1 Input

In this project, we have standardized the cover image to be a 256 × 256 pixel 24-bit image at 256 gray levels, since the best cover image is a grayscale image [23]. Partitioning a 256 × 256 pixel cover image into 8 × 8 pixel blocks yields 1,024 blocks. Also, in the embedding, the secret data can only be an image or a text file.

The modified quantization table gives a capacity of 36 coefficients per block for embedding data, which yields a reasonable payload of 36,864 coefficients in the entire cover image. In effect, when k = 2, the cover image capacity is 73,728 bits or 9,216 bytes; correspondingly, when k = 4, the capacity is 147,456 bits or 18,432 bytes. This places a constraint on the secret data file size.

Figure 4.1: Lena image as cover image (taken from [5])

4.2 Data Preparation

To meet the requirements of the algorithm, the cover image and the secret data are treated accordingly. The cover image undergoes JPEG re-encoding, which is essential for embedding the data and helpful for compression as well. The secret data, on the other hand, is converted to its corresponding bytes, which are then decomposed into k-bit units, each treated as a single unit. Each k-bit secret message unit is represented as a decimal value ranging from 0 to 2^k − 1.
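A minimal sketch of this decomposition (assuming k divides 8 evenly, as with the k = 2 and k = 4 used here):

    def decompose(data: bytes, k: int):
        """Split each byte into 8//k successive k-bit units, most significant first."""
        mask = (1 << k) - 1
        return [(byte >> shift) & mask
                for byte in data
                for shift in range(8 - k, -1, -k)]

    # decompose(b"\xb4", 2) -> [2, 3, 1, 0]; each unit lies in 0 .. 2**k - 1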

5.4 The Procedure for Embedding

To study and determine the efficiency of HSA, the proposed embedding procedure is a combination of the JPEG compression process and the substitution matrix procedure of Li and Wang. Rather than using PSO as the explorer for the optimal substitution matrix, HSA is utilized. Figure 5.3 describes the method to be employed.

Until the maximum cycle is reached, the processes within the dotted box in Figure 5.3 are repeated. The main embedding is done with the logical operator exclusive or (EOR, XOR, EXOR, or ⊕), which is false when both operands are true or both are false. The associative and commutative properties of XOR make it a feasible way to embed data. The stego-image with the highest PSNR will be the output.

Figure 5.3: The embedding procedure diagram. The cover data is partitioned into blocks of 8 × 8 pixels, RGB-to-YUV color transformed, DCT transformed and quantized; the secret data is converted to unsigned bytes, decomposed into k-bits and passed through HSA matrix substitution and translation; the two streams meet at the embedding step, the result is evaluated by MSE and PSNR, and the best stego data is entropy encoded into the JPEG stego file.

5.4.1 JPEG Steganography

JPEG-based steganography starts when the cover image is partitioned into blocks of 8×8 pixels. Then, to achieve a higher hiding rate, the RGB components of each block are converted to YUV using equations 1-3. Using the DCT, each value in a block is transformed to DCT coefficients, and each coefficient is scaled using the modified and optimized quantization tables (Tables 5.1 and 5.2), based on the works of Huang et al. and Chang et al.

After the Harmony Search Algorithm (HSA) finds the best-substituted data, the data can be embedded into the DC-to-middle-frequency components of the quantized DCT coefficients of each 8×8 block, specifically in the 36 coefficients, using the embedding order in Figure 5.6. Then k secret bits are embedded into the k least significant bits of each coefficient, where k is either 2 or 4.

Figure 5.4: Lena image subdivided into blocks. Figure 5.5: A block of 8 × 8 pixels.

The next step applies JPEG entropy coding (Huffman coding, run-length coding and Differential Pulse Code Modulation) to compress the resulting blocks and obtain a JPEG file. These files contain important data gathered from compressing the image, such as the quantization table and other compressed data meeting the requirements of the JPEG standard.

Figure 5.6: DC-to-middle-frequency embedding order

Table 5.1: Proposed Modified and Optimized Luminance JPEG Quantization Table

 8   1   1   1   1   1   1   1
 1   1   1   1   1   1   1  28
 1   1   1   1   1   1  35  28
 1   1   1   1   1  44  40  31
 1   1   1   1  34  55  52  39
 1   1   1  32  41  52  57  46
 1   1  39  44  52  61  60  51
 1  46  48  49  56  50  52  50

Table 5.2: Proposed Modified and Optimized Chrominance JPEG Quantization Table

 9   1   1   1   1   1   1   1
 1   1   1   1   1   1   1  50
 1   1   1   1   1   1  50  50
 1   1   1   1   1  50  50  50
 1   1   1   1  50  50  50  50
 1   1   1  50  50  50  50  50
 1   1  50  50  50  50  50  50
 1  50  50  50  50  50  50  50

5.4.2 Substitution Matrix Using HSA

A harmony X is defined as a one-dimensional representation of a substitution matrix of dimension 2^k:

X = (x0, x1, …, x(2^k − 1))    (17)

where x0 is the column index of the value 1 in row 0 of the substitution matrix, x1 is the column index of the value 1 in row 1, and so on up to row 2^k − 1.

Figure 5.7: Matrix substitution: an example 2^k × 2^k substitution matrix M and its one-dimensional representation M′.

The harmony is simply a one-dimensional representation of the 2^k × 2^k substitution matrix. In the example matrix M, the 1 in row 0 is found at column 0, so the first entry of M′ is 0. Similarly, the second entry of M′ is 3, because the 1 in row 1 is located at column 3. In simple terms, M′ is just the column index at which the value 1 is found in each row of M.
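In code, the harmony amounts to a permutation lookup table; a minimal sketch with k = 2 (rows 0 and 1 follow the M′ example above; rows 2 and 3 are assumed for illustration):

    import numpy as np

    def to_vector(m):
        """Collapse a permutation matrix M into M': the column of each row's 1."""
        return np.argmax(m, axis=1)

    def substitute(units, m_vec):
        """Map each k-bit secret unit through the substitution vector M'."""
        return [int(m_vec[u]) for u in units]

    M = np.array([[1, 0, 0, 0],    # row 0 -> column 0 (as in the text)
                  [0, 0, 0, 1],    # row 1 -> column 3 (as in the text)
                  [0, 1, 0, 0],    # row 2 -> column 1 (assumed)
                  [0, 0, 1, 0]])   # row 3 -> column 2 (assumed)
    print(substitute([0, 1, 2, 3], to_vector(M)))  # [0, 3, 1, 2]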

5.4.3.1 Harmony Initialization

From the basic HSA, the parameters are redefined to study the algorithm's performance. We set MaxC as the maximum number of iterations, HMS as the harmony memory size, HMCR as the harmony memory considering rate and PAR as the pitch adjusting rate. The first three parameters are user-configurable, while PAR is taken as 10 percent of the HMS.

5.4.3.2 Evaluation Function

The stego-image quality assessment is done through peak signal-to-noise ratio (PSNR) evaluation using the mean squared error. For each harmony, the performance is evaluated at gray level using the MSE (Equation 7) and the PSNR (Equation 8):

MSE = (1 / (M × N)) Σ(i=1..M) Σ(j=1..N) (C(i, j) − S(i, j))²    (7)

PSNR = 10 log10(255² / MSE)    (8)

where C and S denote the cover and stego images of size M × N.

In this study, a modification of the HSA's fitness function is necessary, since PSNR is the standard performance measure used as a fitness function in steganography:

f(Xi) = PSNRi    (20)

where i = 1, 2, …, N indexes the harmonies.
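A minimal sketch of this evaluation (cover and stego as same-size 8-bit gray-level NumPy arrays):

    import numpy as np

    def mse(cover, stego):
        """Mean squared error between cover and stego images (Equation 7)."""
        diff = cover.astype(np.float64) - stego.astype(np.float64)
        return np.mean(diff ** 2)

    def psnr(cover, stego):
        """Peak signal-to-noise ratio in dB for 8-bit images (Equation 8)."""
        m = mse(cover, stego)
        return float("inf") if m == 0 else 10.0 * np.log10(255.0 ** 2 / m)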

5.4.3.3 Randomization

5.4.3.4 Pitch Adjustment

5.4.3.5 Memory Consideration

5.4.3.6 Termination Condition

The only termination condition is reaching the number of cycles set by the user. When this value is reached, the stego-image produced by the best PSNRi is displayed.

5.5 The Procedure for Extraction

The extraction process is the reverse of the embedding process, as shown in Figure 5.7. To extract the k LSBs, XOR is employed. The transpose of the HSA-selected substitution matrix is used to decode the exact data embedded into the LSBs of the cover image.

Three inputs are necessary to conduct the extraction procedure: the stego file, the cover image and the substitution matrix. The JPEG stego file is entropy decoded, and the blocks of quantized DCT coefficients are copied. A particular JPEG component, the quantized-DCT (QDCT) luminance component, holds the secret data. The QDCT luminance values of the stego image are XORed with the QDCT luminance values of the cover image to recover the embedded data.
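A minimal sketch of this recovery step (both QDCT luminance planes as same-shape integer NumPy arrays; k is the same value used at embedding time):

    import numpy as np

    def extract_units(cover_qdct, stego_qdct, k):
        """XOR cover against stego coefficients and keep the k LSBs of each result."""
        mask = (1 << k) - 1
        diff = cover_qdct.astype(np.int64) ^ stego_qdct.astype(np.int64)
        return diff & mask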

Figure 5.7: The extraction procedure diagram. The JPEG stego file is entropy decoded; the stego QDCT luminance is XORed against the cover QDCT luminance; the result undergoes data translation with the transpose of the substitution matrix; and the k-bit units are composed back into n-bit bytes to yield the secret data.

It is necessary to compose the bits back into their original byte form with respect to the value of k used. Finally, using the transpose of the substitution matrix, the original secret data is obtained. It is unnecessary to reconstruct the cover image from the stego image, since the cover image itself is part of the extraction requirements because XOR is employed. In short, the cover data is XORed with the stego data to recover the secret data; if the cover data is incorrect, the secret data will also be incorrect.