Sunday, January 15, 2006
Saturday, December 31, 2005
Zero-day Flaw
Friday, December 09, 2005
These Nokia codes will work on most Nokia mobile phones; however, we accept no responsibility of any kind for damage done to your phone whilst trying these Nokia secret codes.
Nokia code - Code function
*3370# This Nokia code activates Enhanced Full Rate Codec (EFR) - Your Nokia cell phone uses the best sound quality but talk time is reduced by approx. 5%
#3370# Deactivate Enhanced Full Rate Codec (EFR)
*#4720# Activate Half Rate Codec - Your phone uses a lower quality sound but you should gain approx 30% more Talk Time
*#4720# With this Nokia code you can deactivate the Half Rate Codec
*#0000# Displays your phone's software version - 1st line: Software Version, 2nd line: Software Release Date, 3rd line: Compression Type
*#9999# Phone's software version if *#0000# does not work
*#06# For checking the International Mobile Equipment Identity (IMEI Number)
#pw+1234567890+1# Provider Lock Status (use the "*" button to obtain the "p", "w" and "+" symbols)
#pw+1234567890+2# Network Lock Status (use the "*" button to obtain the "p", "w" and "+" symbols)
#pw+1234567890+3# Country Lock Status (use the "*" button to obtain the "p", "w" and "+" symbols)
#pw+1234567890+4# SIM Card Lock Status (use the "*" button to obtain the "p", "w" and "+" symbols)
*#147# Lets you know who called you last (Vodafone only)
*#1471# Last call (Vodafone only)
*#21# Allows you to check the number that "All Calls" are diverted to
*#2640# Displays the phone security code in use
*#30# Lets you see the private number
*#43# Allows you to check the "Call Waiting" status of your cell phone
*#61# Allows you to check the number that "On No Reply" calls are diverted to
*#62# Allows you to check the number that "Divert If Unreachable (no service)" calls are diverted to
*#67# Allows you to check the number that "On Busy" calls are diverted to
*#67705646# Removes the operator logo on the 3310 & 3330
*#73# Resets phone timers and game scores
*#746025625# Displays the SIM Clock status; if your phone supports the power-saving feature "SIM Clock Stop Allowed", you will get the best standby time possible
*#7760# Manufacturer's code
*#7780#Restore factory settings
*#8110# Software version for the Nokia 8110
*#92702689# Displays: 1. Serial Number, 2. Date Made, 3. Purchase Date, 4. Date of last repair (0000 for no repairs), 5. Transfer User Data. To exit this mode you need to switch your phone off and then on again
*#94870345123456789# Deactivate the PWM-Mem
**21*number# Turn on "All Calls" diverting to the phone number entered
**61*number# Turn on "No Reply" diverting to the phone number entered
**67*number# Turn on "On Busy" diverting to the phone number entered
12345 This is the default security code
Press and hold # - Lets you switch between lines
Bypass the SP lock
With a Nokia 16xx/21xx/31xx/51xx/81xx that is SIM-locked to one provider, you can bypass the SP lock like this:
1] Insert a SIM card from a different provider.
2] Turn on the phone and hold the volume-up key for 3 seconds; then release it and the phone says "PIN CODE?".
3] Press the "C" key.
4] Press * and wait until it disappears and appears again, then press * one more time and enter 04*PIN*PIN*PIN#
The phone now says "PIN CODE CHANGED" (or "ACCEPTED")
and the SIM card is accepted until you restart the phone again.
Thanks to CrashOut for the information.
Thursday, December 08, 2005
Some of the sites I use are given below:
Astalavista - here we get links to all the cracking sites
KEYGEN - this is the best site for finding cracks
CRACK DB - the common crack database
PHAZEDDL - one of the best sites
Want Flash games?
Here's a link to brokencode.
Friday, December 02, 2005
Tidbits on Image Processing
In the broadest sense, image processing includes any form of information processing in which the input is an image. Many image processing techniques derive from the application of signal processing techniques to the domain of images — two-dimensional signals such as photographs or video.
Most of the signal processing concepts that apply to one-dimensional signals — such as resolution, dynamic range, bandwidth, filtering, etc. — extend naturally to images as well. However, image processing brings some new concepts — such as connectivity and rotational invariance — that are meaningful or useful only for two-dimensional signals. Also, certain one-dimensional concepts — such as differential operators, edge detection, and domain modulation — become substantially more complicated when extended to two dimensions.
The name image processing is most appropriate when both inputs and outputs are images. The extraction of arbitrary information from images is the domain of image analysis, which includes pattern recognition when the patterns to be identified are in images. In computer vision one seeks to extract more abstract information, such as the 3D description of a scene from video footage of it. The tools and concepts of image processing are also relevant to image synthesis from more abstract models, which is a major branch of computer graphics.
The enormous size of images, compared to other data streams commonly processed by computers, and the need to process images quickly, has led to whole sub-fields on high speed image processing. A few decades ago, image processing was done largely in the analog domain, chiefly by optical devices. Optical methods are inherently parallel, and for that reason they are still essential to holography and a few other applications. However, as computers keep getting faster, analog techniques are being increasingly replaced by digital image processing techniques — which are more versatile, reliable, accurate, and easier to implement. Specialized hardware is still used for digital image processing: computer architectures based on pipelining have been the most commercially successful, but many different massively parallel architectures were developed as well. These architectures, especially pipelined architectures, are still commonly used in video processing systems. However, these days commercial image processing tasks with a processing speed of a few images per second or less are increasingly done by software libraries running on conventional personal computers.
The goal of edge detection is to mark the points in an image at which the intensity changes sharply. Sharp changes in image properties usually reflect important events and changes in world properties. Edge detection is a research field within image processing and feature extraction.
Edges may be viewpoint dependent - these are edges that may change as the viewpoint changes, and typically reflect the geometry of the scene, objects occluding one another and so on, or may be viewpoint independent - these generally reflect properties of the viewed objects such as markings and surface shape. In two dimensions, and higher, the concept of a projection has to be considered.
A typical edge might be (for instance) the border between a block of red color and a block of yellow; in contrast, a line can be a small number of pixels of a different color on an otherwise unchanging background. There will be one edge on each side of the line. Edges play quite an important role in all applications of image processing.
Detecting an edge
Taking an edge to be a change in intensity taking place over a number of pixels, edge detection algorithms generally calculate a derivative of this intensity change. To simplify matters, we can consider the detection of an edge in 1 dimension. In this instance, our data can be a single line of pixel intensities. For instance an edge can clearly be detected between the 4th and 5th pixels in the following 1-dimensional data:
5 7 6 4 152 148 149
Calculating the 1st derivative
Many edge-detection operators are based upon the 1st derivative of the intensity - this gives us the intensity gradient of the original data. Using this information we can search an image for peaks in the intensity gradient.
If I(x) represents the intensity of pixel x, and I′(x) represents the first derivative (intensity gradient) at pixel x, we therefore find that:
I′(x) = −I(x−1) + I(x+1)
For higher performance image processing, the 1st derivative can therefore be calculated (in 1D) by convolving the original data with a mask:
−1 0 1
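As a rough illustration, the first-derivative mask can be applied to the 1-dimensional data above in plain Python (the function name is ours, not from the original text):

```python
def gradient_1d(intensities):
    # Apply the mask [-1, 0, 1]: I'(x) = I(x+1) - I(x-1).
    # The two border pixels have no full neighbourhood and are skipped.
    return [intensities[x + 1] - intensities[x - 1]
            for x in range(1, len(intensities) - 1)]

pixels = [5, 7, 6, 4, 152, 148, 149]
grad = gradient_1d(pixels)
print(grad)  # [1, -3, 146, 144, -3]
```

The two large values (146 and 144) straddle the jump between the 4th and 5th pixels, which is exactly where the edge was said to be.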
Calculating the 2nd derivative
Some other edge-detection operators are based upon the 2nd derivative of the intensity. This is essentially the rate of change in intensity gradient and is best at detecting lines: as noted above, a line is a double edge, hence we will see an intensity gradient on one side of the line, followed immediately by the opposite gradient on the opposite side. Therefore we can expect to see a very high change in intensity gradient where a line is present in the image. To find lines, we can search the results for zero-crossings of the change in gradient.
If I(x) represents the intensity at point x, and I″(x) is the second derivative at point x:
I″(x) = I(x−1) − 2·I(x) + I(x+1)
Again most algorithms use a convolution mask to quickly process the image data:
+1 −2 +1
Once we have calculated our derivative, the next stage is to apply a threshold to determine where the results suggest an edge is present. The lower the threshold, the more lines will be detected, and the results become increasingly susceptible to noise and to picking out irrelevant features from the image. Conversely, a high threshold may miss subtle lines, or sections of lines.
A commonly used compromise is thresholding with hysteresis. This method uses multiple thresholds to find edges. We begin by using the upper threshold to find the start of a line. Once we have a start point, we trace the edge's path through the image pixel by pixel, marking an edge whenever we are above the lower threshold. We stop marking our edge only when the value falls below our lower threshold. This approach makes the assumption that edges are likely to be in continuous lines, and allows us to follow a faint section of an edge we have previously seen, without meaning that every noisy pixel in the image is marked down as an edge.
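The hysteresis idea can be sketched in 1-D; the function name and the example thresholds below are illustrative, not taken from any particular implementation:

```python
def hysteresis_1d(gradient, low, high):
    # Start marking when a value reaches `high`; keep marking while
    # values stay at or above `low`; stop once a value drops below `low`.
    marked = [False] * len(gradient)
    tracing = False
    for i, v in enumerate(gradient):
        if v >= high:
            tracing = True
        elif v < low:
            tracing = False
        if tracing:
            marked[i] = True
    return marked

# The faint stretch (6, 5) after a strong start (9) is kept, while the
# isolated 6 at the end never reaches `high` and so is rejected.
result = hysteresis_1d([2, 9, 6, 5, 1, 6], low=4, high=8)
print(result)  # [False, True, True, True, False, False]
```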
Edge detection operators
• 1st order: Roberts Cross, Prewitt, Sobel, Canny, Spacek
• 2nd Order: Laplacian, Marr-Hildreth
Currently, the Canny operator is most commonly used, followed by Marr-Hildreth. Very many operators have been published but so far none have any significant advantage over the Canny operator in general situations. Work on multi-scale techniques is still very much in the labs.
Retrieved from "http://en.wikipedia.org/wiki/Edge_detection"
From Wikipedia, the free encyclopedia.
In image processing, the Sobel operator is a simple edge detection algorithm using the 1st derivative of the intensity information.
The operator uses two 3x3 kernels convolved with the original image to produce a map of intensity gradient. The areas of highest gradient are where the intensity of the image changes rapidly over a few pixels, and are thus likely to represent edges.
Two convolution kernels are needed to detect the first-order derivative of both horizontal and vertical changes in a 2-dimensional image. If we define A as the source image, we can compute the gradient components Gx and Gy:
Gx = A convolved with:
−1 0 +1
−2 0 +2
−1 0 +1
Gy = A convolved with:
+1 +2 +1
0 0 0
−1 −2 −1
These can then be combined to give the overall magnitude using:
G = √(Gx² + Gy²)
Using this information, we can also calculate the gradient's direction:
Θ = arctan(Gy / Gx)
Where Θ will be 0 for a vertical edge, and will increase for edges anti-clockwise of this.
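A minimal pure-Python sketch of the Sobel computation described above (the kernel names and the `sobel` function are ours; a real implementation would also normalise and handle borders):

```python
import math

# 3x3 Sobel kernels: KX responds to horizontal changes (vertical edges),
# KY to vertical changes (horizontal edges).
KX = [[-1, 0, 1],
      [-2, 0, 2],
      [-1, 0, 1]]
KY = [[ 1,  2,  1],
      [ 0,  0,  0],
      [-1, -2, -1]]

def sobel(image):
    # Return (magnitude, direction) maps; border pixels are left at 0.
    h, w = len(image), len(image[0])
    mag = [[0.0] * w for _ in range(h)]
    theta = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(KX[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(KY[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            mag[y][x] = math.hypot(gx, gy)     # G = sqrt(Gx^2 + Gy^2)
            theta[y][x] = math.atan2(gy, gx)   # 0 for a vertical edge
    return mag, theta

# A hard vertical edge between columns 1 and 2:
img = [[0, 0, 255, 255]] * 4
mag, theta = sobel(img)
```

On this image the interior pixels next to the jump get a large magnitude and a direction of 0, matching the statement above that Θ is 0 for a vertical edge.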
In mathematics and in particular, functional analysis, convolution is a mathematical operator which takes two functions f and g and produces a third function that in a sense represents the amount of overlap between f and a reversed and translated version of g. A convolution is a kind of very general moving average, as one can see by taking one of the functions to be an indicator function of an interval.
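In the discrete 1-D case this definition is easy to write down, and convolving with an indicator (all-ones) kernel gives exactly the moving-window sum mentioned above (a sketch, not a library routine):

```python
def convolve(f, g):
    # Discrete convolution: (f * g)[n] = sum over k of f[k] * g[n - k].
    out = [0] * (len(f) + len(g) - 1)
    for i, fv in enumerate(f):
        for j, gv in enumerate(g):
            out[i + j] += fv * gv
    return out

# Indicator kernel [1, 1, 1]: each output is a 3-wide window sum,
# i.e. an (unnormalised) moving average of f.
print(convolve([1, 2, 3, 4], [1, 1, 1]))  # [1, 3, 6, 9, 7, 4]
```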
From Wikipedia, the free encyclopedia.
In computing, a grayscale or greyscale digital image is an image in which the value of each pixel is a single sample. Displayed images of this sort are typically composed of shades of gray, varying from black at the weakest intensity to white at the strongest, though in principle the samples could be displayed as shades of any color, or even coded with various colors for different intensities. Grayscale images are distinct from black-and-white images, which in the context of computer imaging are images with only two colors, black and white; grayscale images have many shades of gray in between. In most contexts other than digital imaging, however, the term "black and white" is used in place of "grayscale"; for example, photography in shades of gray is typically called "black-and-white photography". The term monochromatic in some digital imaging contexts is synonymous with grayscale, and in some contexts synonymous with black-and-white.
Grayscale images are often the result of measuring the intensity of light at each pixel in a single band of the electromagnetic spectrum (e.g. visible light).
Grayscale images intended for visual display are typically stored with 8 bits per sample, which allows 256 intensities (i.e., shades of gray) to be recorded, typically on a non-linear scale. The accuracy provided by this format is barely sufficient to avoid visible banding artifacts, but very convenient for programming. Technical uses (e.g. in medical imaging or remote sensing applications) often require more levels, to make full use of the sensor accuracy (typically 10 or 12 bits per sample) and to guard against roundoff errors in computations. Sixteen bits per sample (65536 levels) appears to be a popular choice for such uses.
In the reality television show Big Brother the photographs of evicted contestants on the "Memory Wall" (Seasons 1-5) and "Memory Board" (Season 6) are shown in Grayscale, while contestants who are still in the game are shown in full color.
Segmentation (image processing)
From Wikipedia, the free encyclopedia.
In image analysis, segmentation is the partition of a digital image into multiple regions (sets of pixels), according to some criterion.
The goal of segmentation is typically to locate certain objects of interest which may be depicted in the image. Segmentation could therefore be seen as a computer vision problem. Unfortunately, many important segmentation algorithms are too simple to solve this problem accurately: they compensate for this limitation with their predictability, generality, and efficiency.
A simple example of segmentation is thresholding a grayscale image with a fixed threshold t: each pixel p is assigned to one of two classes, P0 or P1, depending on whether I(p) < t or I(p) ≥ t.
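The fixed-threshold rule above takes only a few lines of Python (the function name is ours):

```python
def threshold_segment(image, t):
    # Assign each pixel to class P1 where I(p) >= t, else class P0.
    return [[1 if p >= t else 0 for p in row] for row in image]

print(threshold_segment([[12, 200], [30, 220]], 128))  # [[0, 1], [0, 1]]
```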
Some other segmentation algorithms are based on segmenting images into regions of similar texture according to wavelet or Fourier transforms.
Segmentation criteria can be arbitrarily complex, and take into account global as well as local criteria. A common requirement is that each region must be connected in some sense.
An example of a global segmentation criterion is the famous Mumford-Shah functional. This functional measures the degree of match between an image and its segmentation. A segmentation consists of a set of non-overlapping connected regions (the union of which is the image), each of which is smooth and each of which has a piecewise smooth boundary. The functional penalizes deviations from the original image, deviations from smoothness within each region, and the total length of the boundaries of all the regions. Mathematically, with I the original image over a domain R, J its piecewise smooth approximation, and Γ the set of region boundaries, the functional has the form
E(J, Γ) = α ∫ (J − I)² dA + β ∫_(R∖Γ) |∇J|² dA + γ · length(Γ)
for weighting constants α, β, γ.
Thresholding is the simplest method of image segmentation. Individual pixels in a grayscale image are marked as 'object' pixels if their value is greater than some threshold value (assuming an object to be brighter than the background) and as 'background' pixels otherwise. Typically, an object pixel is given a value of '1' while a background pixel is given a value of '0'.
The key parameter here is obviously the choice of the threshold, and several different methods for choosing one exist. The simplest would be to choose the mean or median value, the rationale being that if the object pixels are brighter than the background, they should also be brighter than the average. In a noiseless image with uniform background and object values, the mean or median will work beautifully as the threshold; generally speaking, however, this will not be the case.
A more sophisticated approach is to create a histogram of the image pixel intensities and use the valley point as the threshold. The histogram approach assumes that there is some average value for the background and object pixels, but that the actual pixel values have some variation around these average values. However, computationally this is not as simple as we'd like, and many image histograms do not have clearly defined valley points. Ideally we are looking for a method of choosing the threshold which is simple, does not require too much prior knowledge of the image, and works well for noisy images.
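The two simplest threshold choices discussed above, the mean and the histogram valley, can be sketched in plain Python (function names and the tiny example image are ours):

```python
def mean_threshold(image):
    # Rationale: if object pixels are brighter than the background,
    # they are also brighter than the overall average.
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def histogram(image, bins=256):
    # Intensity histogram; a valley between the background peak and the
    # object peak suggests a threshold, when such a valley exists.
    h = [0] * bins
    for row in image:
        for p in row:
            h[p] += 1
    return h

img = [[10, 10, 200], [10, 200, 200]]
t = mean_threshold(img)
print(t)  # 105.0
```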
Special thanks to Nitin for collecting all this stuff!
Saturday, August 27, 2005
The basic problem once you have intruded into a system is how to make the AVs (antivirus programs) inactive. This is the greatest problem faced by all. In order to overcome this difficulty, the link provided below gives the names of the files which guard against intrusion in AVs. If you can delete or end these programs, you can't be blocked in that system by that AV.
Friday, August 26, 2005
Introduction to the site
This site is meant for those interested in security and its loopholes, in both software and hardware. Services to people through this site should be free, and any objectionable content on this site, such as money transfers or links to adult sites, is strictly prohibited.
Anyone can become a member of the site; whoever wants to be a member should email firstname.lastname@example.org. Members can blog their papers on security, programming, various notes on kernels, etc., which are always welcome on the site. Papers with a contact e-mail ID are always preferred.
COMING SOON : THE INTRUDERS