What Is 1080p Resolution? FHD Explained

What is the difference between 1080p and 720p resolution?

When it comes to video quality, we’re all used to seeing a collection of numbers splattered on our television packaging, on laptop screens and on streaming services, but what do they actually mean?

Whether it’s 1080p, 4K, 720p or 360p, the numbers can all start to blur into one, making it difficult to differentiate between them and understand what the actual difference is. If you’re sick of seeing a bunch of numbers and not understanding what they mean, you’re in the right place.

At DEXON, we make it our responsibility to ensure you understand what you’re in for when you select a 1080p, 720p or 4K monitor. Today, we’ll be discussing what TV resolution is, the difference between 1080p and 720p, and why this matters to your viewing experience.


What does TV resolution mean?

Before we start getting into the specific numbers, it’s very important that we discuss the primary topic, which is TV resolution. Both 1080p and 720p refer to the TV resolution which is defined by the number of vertical and horizontal pixels. Putting it simply, TV resolution will determine the quality of the picture on your TV.

Generally speaking, the higher the resolution number, the higher the quality. On higher resolution televisions, images will appear a lot crisper and allow you to identify smaller details, whereas lower resolutions like 360p and below will appear relatively blurry with little to no sharp edges to your image.

When we talk about TV resolution, it’s important to distinguish between native and image resolution. Native resolution refers to the TV or monitor’s own resolution: what it can physically display. Image resolution refers to the resolution of the image signal sent to the TV, for example via an HDMI cable. This difference is the reason why a 4K UHD TV may only display a high-definition image when you go to watch a film or select something from your TV guide.

What does the “p” stand for?

While many would naturally assume that the “p” in these numbers stands for pixels, it actually doesn’t. The “p” stands for progressive scan (also known as non-interlaced scanning). This refers to a format for displaying, transmitting and storing moving images in which all the lines of each frame are drawn in sequence.

In other words, progressive scan means that all the lines in a frame are drawn in a single pass, and the whole image refreshes every cycle. This makes for very high-quality images.

What is 1080p resolution?

Now that we understand the specifics, let’s get onto the nitty gritty. So what is 1080p? 1080p is a type of high-definition television that displays 1,920 pixels horizontally across the screen and 1,080 pixels vertically down the screen.

1080p typically uses a widescreen aspect ratio of 16:9 and a resolution of around 2.1 megapixels. It is one of the most common streaming resolutions on websites like YouTube, as it offers high resolution without eating up too much internet bandwidth.

Although 1080p is sometimes marketed as 2K resolution, it’s important to note that the two aren’t quite the same: DCI 2K is 2,048 x 1,080, so it has a slightly different horizontal resolution and aspect ratio.

As with other resolutions, this number refers to the total number of pixels displayed across a screen at any time. To understand how many pixels this is, you multiply the horizontal number by the vertical one. For a 1080p television, that would be 1,920 x 1,080, which means there are 2,073,600 pixels on the screen of a 1080p high-definition television.

 

What is 720p resolution?

720p resolution is another example of high-definition television. With 720p, a television has a total of 1,280 horizontal pixels and 720 vertical pixels. The aspect ratio for a 720p television is 16:9.

To work out the number of pixels on a 720p television, you apply the same equation used for 1080p: simply multiply 1,280 by 720. That means on a 720p television, there are 921,600 pixels at any given time.
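If you want to sanity-check these numbers yourself, here is a quick, minimal Python sketch that multiplies width by height for a few common resolutions (the figures are the standard pixel dimensions):

```python
# A minimal sketch: multiply width by height to get the total pixel count.
RESOLUTIONS = {
    "720p": (1280, 720),
    "1080p": (1920, 1080),
    "4K UHD": (3840, 2160),
}

for name, (width, height) in RESOLUTIONS.items():
    total = width * height
    print(f"{name}: {width} x {height} = {total:,} pixels (~{total / 1e6:.1f} MP)")

# 720p: 1280 x 720 = 921,600 pixels (~0.9 MP)
# 1080p: 1920 x 1080 = 2,073,600 pixels (~2.1 MP)
# 4K UHD: 3840 x 2160 = 8,294,400 pixels (~8.3 MP)
```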

Although part of the high-definition club, 720p doesn’t usually compare well to 1080p on today’s computer monitors. Anyone who has switched from 1080p to 720p will tell you that the difference, while subtle at first glance, adds up to a noticeably softer picture overall. However, there are some positives.

720p takes up less bandwidth and data than 1080p or 4K, so is a lot more budget-friendly for anyone streaming content via their mobile data on their smartphone or tablet.

1080p vs. 720p: what are the differences?

Now that you understand what both 1080p and 720p mean, let’s look at the key differences you may notice if you have a keen eye:

  • Picture quality – Although there’s relatively little difference between the image quality of 1080p and 720p, switching between the two will show that 1080p produces a sharper, clearer image than 720p. The gap is small, though, so if you opt for a budget-friendly 720p monitor instead of a full HD one, you’re unlikely to encounter any major image issues.
  • Pixel count – Of course, one of the most noticeable differences between the two resolutions is the pixel count. 720p has a pixel count of under a million, whereas 1080p has well over two million pixels. This has a slight impact on image quality and clarity.
  • Data usage – Data usage is one of the most significant differences between 1080p and 720p. When talking about data usage, we’re referring to how much data is required to stream a film or TV program per hour. For 720p, a 60 frame-per-second video will use around 1.86 GB an hour, whereas 1080p will use around 3.04 GB an hour (a rough sketch of this arithmetic follows the list).
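To see where per-hour figures like these come from, here is a rough Python sketch; the bitrates in it are illustrative assumptions chosen to land near the figures above, not official values from any streaming service:

```python
# Rough sketch: per-hour data usage follows directly from the stream bitrate.
# These bitrates are illustrative assumptions; real services vary the bitrate
# with codec, frame rate and scene complexity.
ASSUMED_BITRATE_MBPS = {
    "720p60": 4.1,
    "1080p60": 6.8,
}

SECONDS_PER_HOUR = 3600
BITS_PER_GB = 8 * 1000**3  # decimal gigabytes

for label, mbps in ASSUMED_BITRATE_MBPS.items():
    gb_per_hour = mbps * 1_000_000 * SECONDS_PER_HOUR / BITS_PER_GB
    print(f"{label}: ~{gb_per_hour:.2f} GB per hour")
```

Run as written, this prints roughly 1.85 GB and 3.06 GB per hour, which is why halving the resolution is such an effective way to save mobile data.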

Where do we use 1080p and 720p resolutions?

We tend to refer most heavily to 1080p and 720p resolutions with anything involving a monitor or television. This includes streaming, watching satellite TV or even gaming. The two resolutions make the most difference when it comes to internet streaming, as each uses up different amounts of bandwidth which can impact internet speed.

The next time you’re browsing through YouTube, for example, try changing the resolution to a lower setting and see how quickly your video loads in comparison to higher resolutions. These two resolutions also have a significant impact on gaming.

This is because 1080p devices require less anti-aliasing to make images appear smoother and more cohesive due to the higher pixel count. Anti-aliasing tends to slow computers down, so opting for a 1080p gaming set-up can have a positive impact on your overall gaming experience.


Final thoughts

We hope you now feel more confident in your knowledge about both 720p and 1080p. Why not check out the rest of DEXON’s blog where we discuss all aspects of AV technology, including deep-diving into the world of resolution and high-definition television?

And, if you’re looking for video equipment that’s capable of streaming moving images at a super high resolution, check out DEXON’s product family of great video wall processors, matrix switchers and controllers to take your presentations and viewing experiences to a new level!

What is 1080p? | Ubergizmo

1080p means 1920×1080 pixels of resolution. It is also known as Full HD or FHD, and it is the resolution used by the ITU’s BT.709 HDTV recommendation. It applies to TVs, displays and media content, and the “p” means that the scan is “progressive”: all 1,080 lines are present in every frame.

1080p implies a display/content aspect ratio of 16:9 and represents 2M pixels (2073600 to be exact). FHD/Full-HD is the other most-used term to market this resolution. As a reminder, “HD” was introduced as 1280×720 pixels, also known as 720p.

1080p only describes the resolution; it does not imply the refresh rate for displays or the frame rate for video content. Those details are defined by the various broadcasting and video-encoding standards that produce a 1080p image.

1080p was and still is a standard for many things, including Blu-ray content, televisions, computer screens and mobile-device displays, to cite just the most popular ones. 1080p and FHD/Full HD are used interchangeably in “technology” communication materials, and they mean the same thing.

Can content that is not 1080p be played back on a 1080p display?

It’s possible. Every display system has a “native” resolution, which is its number of pixels. Any media content can be resized and filtered to fit any native resolution.

If the display resolution is higher than the content resolution, the image quality won’t be improved by much. There are ways to slightly enhance the image during the up-scaling phase, but the result won’t equal content created for the target resolution.

If the display resolution is lower than the content resolution, the resizing will yield the best possible image quality for that display. This is down-scaling. Both up-scaling and down-scaling are jobs for a video scaler.
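As a rough illustration of what a scaler does, the snippet below resizes a single frame to a display’s native 1080p resolution using the Pillow imaging library; the file name is a placeholder, and real video scalers do this for every frame in dedicated hardware:

```python
# Minimal sketch of scaling a frame to a display's native resolution with
# Pillow (pip install Pillow). "frame.png" is a placeholder file name.
# Note: if the content's aspect ratio differs from 16:9, a real scaler would
# letterbox or pillarbox instead of stretching as this simple resize does.
from PIL import Image

NATIVE_RESOLUTION = (1920, 1080)  # a 1080p display

frame = Image.open("frame.png")  # e.g. 1280x720 (up-scale) or 3840x2160 (down-scale)
fitted = frame.resize(NATIVE_RESOLUTION, resample=Image.Resampling.LANCZOS)
fitted.save("frame_1080p.png")
```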

Although most good video players have some image downscaling capabilities, not all of them can upscale an image to a resolution that was possibly not in existence when they were manufactured. For example, many Blu-Ray players were sold well before 4K became a standard. Therefore, they have no awareness of that resolution, and cannot upscale to it.

What’s the difference between 1080p and 1080i?

1080i and 1080p can cause some confusion, although the use of 1080i is on its way out. The “i” stands for “interlaced” (and “p” for “progressive”, as we said earlier).

The difference is that 1080i displays show only half the vertical resolution of 1080p at any instant, so roughly 540 actual lines per field. They display 1080-line content by “interlacing” the image lines: the odd and even lines are drawn on alternating fields.

This results in a slightly better image than having 1920×540, but it is noticeably not as good as true 1080p. This interlacing technique was widely used when analog displays were very common and yet unable to physically achieve 1080p resolution. 1080i isn’t used much these days, but you may bump into it from time to time.
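For the curious, here is a toy NumPy sketch of the “weave” idea behind interlacing: two half-height fields are interleaved line by line to rebuild a full 1080-line frame (real deinterlacers also have to deal with the two fields being captured at slightly different moments in time):

```python
# Toy sketch of weaving two interlaced 540-line fields back into one
# 1080-line frame (requires NumPy). Random data stands in for real fields.
import numpy as np

HEIGHT, WIDTH = 1080, 1920
even_field = np.random.rand(HEIGHT // 2, WIDTH)  # carries frame lines 0, 2, 4, ...
odd_field = np.random.rand(HEIGHT // 2, WIDTH)   # carries frame lines 1, 3, 5, ...

frame = np.empty((HEIGHT, WIDTH))
frame[0::2, :] = even_field  # interleave the even lines
frame[1::2, :] = odd_field   # interleave the odd lines

print(frame.shape)  # (1080, 1920): a full frame rebuilt from two half-height fields
```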


What are the similarities and differences between 1080p and 1080i?

Modern display technology offers almost unlimited choice, and everything seems simple until you actually have to pick. This article deals with the dilemma of 1080p versus 1080i, and in particular with their similarities and differences.

HDTV stands for High-Definition Television, a broadcast standard approved by the FCC for home and television applications; this standard defines the rules for HDTV imaging.

HDTV broadcasting uses two main formats:

720p – 720 lines of the image are drawn progressively. 720p delivers a very smooth picture thanks to progressive scanning. Also, while 720p is considered high definition, it requires less bandwidth than 1080i, which we’ll cover next.

1080i (1080 image lines scanned in alternate fields, each field consisting of 540 lines) is the most commonly used HDTV format and has been adopted by PBS, NBC and CBS as their HDTV broadcast standard. The lines traverse the screen from top to bottom, with the even field displayed first and then the odd one. The result of this process is a complete 1080-line picture roughly every 1/30 of a second.

The 1080p format is also in use today: all 1,080 lines of resolution appear progressively, providing the highest-quality HD picture.

However, 1080p is not among the FCC-approved HDTV broadcast standards; instead, TVs and projectors scale and process incoming signals to 1080p for display compatibility.

There is a lot of talk about 1080p being the “Holy Grail” of high definition.

Much of this is marketing: in practice, access to 1080p is simply determined by whether your TV can receive a 1080p signal directly from the source, or whether it scales and processes all input signals up to 1080p.

Advances in screen resolution bring a great deal of comfort and innovation to the television industry and to our lives. Both 1080i and 1080p are high-definition display formats designed primarily for HDTV, and the key feature of both is the 1920×1080 pixel resolution. On closer inspection, these technologies have more in common than not: both signals carry the same information, although they differ in how it is delivered and displayed on the screen.

In 1080p, the frame is transmitted progressively, so it is displayed in one piece: both the odd and even lines that form a complete frame appear on the screen at the same time. The result is a sharper image with fewer motion artifacts and smoother edges.

It is also worth understanding that 1080p comes in several variants. For example, 1080p/24 is also called the standard motion-picture film frame rate: it displays 24 frames per second, matching the native frame rate of standard 35mm film as it is transferred from the source. To see this image, your screen must accept and read a 1080p signal at 24 fps. Similarly, 1080p/30 (the recorded video frame rate) displays 30 frames per second, and 1080p/60 (the enhanced video frame rate) displays 60 frames per second.
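As a quick worked check of those frame rates, the number after the slash is simply frames per second, so the time each frame stays on screen is one divided by that number:

```python
# Quick check: the number after the slash is frames per second, so each frame
# stays on screen for 1/fps of a second.
for fps in (24, 30, 60):
    print(f"1080p/{fps}: one frame every {1000 / fps:.1f} ms")

# 1080p/24: one frame every 41.7 ms
# 1080p/30: one frame every 33.3 ms
# 1080p/60: one frame every 16.7 ms
```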

Among other things, when connecting a computer to an HDTV via DVI or HDMI, the quality of the transmitted content plays a major role, since the PC sends a progressive signal at 60 fps without repeating frames.

In practice, 1080i is the system used by the vast majority of satellite and terrestrial platforms broadcasting high-definition channels in Full HD.

In conclusion, forming a high-quality image on a TV also depends on factors such as color accuracy, contrast, brightness, maximum viewing angle and whether the scan is interlaced or progressive. All of these contribute to image quality.

WHAT ARE THE VIDEO RESOLUTIONS?

1. What is a high definition security camera?

All image formats with a resolution of 1280×720 or higher are considered high definition (HD). In the modern world of video surveillance there are two directions, analog and digital, so there are analog and network (IP) HD cameras. The 960H resolution (NTSC: 960×480) is not classified as HD. Current HD resolution formats include: 1.0 megapixel (720p), 1.3 megapixel (960p), 2 megapixel (1080p), 3 megapixel, 5 megapixel, 8 megapixel (4K UHD), 12 megapixel and 33 megapixel (8K UHD).
In general, HD network cameras provide slightly better image quality than analog HD cameras of the same resolution (e.g. 720p).
Recently, one of our customers reported that he installed a video surveillance system with 720p AHD cameras (the manufacturer claimed 1000TVL) and was dissatisfied: the image quality of these 720p AHD cameras turned out to be even worse than his old 960H cameras. We will explain why this happened in the fourth part of the article.

2. Benefits of High Definition

Compared with standard definition, HD technology enhances image detail. Picture quality is further improved by enhancement technologies such as progressive scan, 2D/3D dynamic noise reduction and wide dynamic range (WDR). In short, HD delivers superb picture quality. A conventional standard-definition 960H analog camera gives a resolution of 960H/WD1, which is 960×480 pixels (NTSC) or 960×576 pixels (PAL); after the signal has been digitized in the DVR or hybrid DVR, the image consists of at most 552,960 pixels (about 0.5 megapixels).
A high-definition camera can also cover a much wider area than a normal camera. Take, for example, a 12-megapixel panoramic fisheye camera with a 360-degree field of view: with its built-in 12MP image sensor, ePTZ (virtual pan/tilt/zoom) and split-image capability, it can replace multiple conventional CCTV cameras at once, greatly reducing installation and maintenance costs.
Excellent compatibility is another advantage of HD. Whether you shop online or go to local electronics stores, you will notice that virtually all TVs, camcorders and digital cameras support 1080p (Full HD). Accordingly, if you want this equipment to work with your CCTV system, choose a CCTV system that supports 1080p. And since 4K is the current trend, it is logical to expect 4K UHD video surveillance systems to become popular in the future.

3. Various HD resolution formats

High-definition IP cameras are at the forefront of video surveillance systems. They can provide higher quality video with greater image detail and wider coverage than standard definition cameras. You can choose the right format of network (IP) cameras according to your requirements. For example, for face recognition or license plate recognition applications, choose 1080p or higher megapixel network cameras. To find out the resolution of a particular HD format, refer to the following table:

Format Resolution (in pixels) Aspect ratio Scan
1MP/720p 1280×720 16:9 Progressive
SXGA/960p 1280×960 4:3 Progressive
1.3MP 1280×1024 5:4 Progressive
2MP/1080p 1920×1080 16:9 Progressive
2.3MP 1920×1200 16:10 Progressive
3MP 2048×1536 4:3 Progressive
4MP 2592×1520 16:9 Progressive
5MP 2560×1920 4:3 Progressive
6MP 3072×2048 3:2 Progressive
4K Ultra HD 3840×2160 16:9 Progressive
8K Ultra HD 7680×4320 16:9 Progressive
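To make the “choose 1080p or higher for face or license-plate recognition” advice above a little more concrete, here is a hedged sizing sketch. The 250 pixels-per-meter figure is a commonly quoted identification-level guideline and should be treated as an assumption; the point is simply that horizontal resolution limits how wide a scene you can cover at a given pixel density:

```python
# Hedged sizing sketch: how wide a scene can each resolution cover at an
# assumed pixel density? 250 px/m is a commonly quoted identification-level
# guideline; treat it as an assumption, not a specification.
REQUIRED_PX_PER_M = 250

for name, horizontal_px in (("720p", 1280), ("1080p", 1920), ("4K UHD", 3840)):
    max_scene_width_m = horizontal_px / REQUIRED_PX_PER_M
    print(f"{name}: about {max_scene_width_m:.1f} m of scene width at {REQUIRED_PX_PER_M} px/m")
```

With these assumptions a 720p camera covers roughly 5 m of scene width, a 1080p camera roughly 7.7 m, and a 4K camera roughly 15 m.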

4. Choosing an HD Surveillance Camera

Besides image resolution, what else should be considered when choosing HD network cameras? Here we will share information on how to choose the right HD cameras from an installer’s point of view.

Low illumination

As you know, a CCTV camera does not work like a consumer camera: it cannot use a flash when capturing an image or video. If a camera performs poorly in low light, its usefulness is limited; in such conditions it effectively “goes blind”, despite its very high resolution.

High resolution is a double-edged sword: the sensor manufacturer cannot increase the die area indefinitely, so a higher resolution means a smaller pixel for the same sensor die size (usually 1/3”). Each pixel then receives less light, so sensitivity decreases as the resolution (megapixels) increases.
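A rough back-of-the-envelope sketch makes this trade-off visible. Assuming a 1/3” sensor with an active width of about 4.8 mm (an approximation; exact die dimensions vary by manufacturer), the pixel pitch shrinks quickly as the horizontal resolution grows:

```python
# Back-of-the-envelope sketch: pixel pitch on a 1/3" sensor as resolution grows.
# The 4.8 mm active width is an approximation; exact die sizes vary by vendor.
SENSOR_WIDTH_MM = 4.8  # assumed 1/3" sensor active width

for name, horizontal_px in (("720p", 1280), ("1080p", 1920), ("4MP", 2592), ("4K UHD", 3840)):
    pitch_um = SENSOR_WIDTH_MM / horizontal_px * 1000
    print(f"{name}: roughly {pitch_um:.2f} um per pixel")
```

Under these assumptions the pitch drops from about 3.75 um at 720p to about 1.25 um at 4K, which is why light sensitivity tends to fall as the megapixel count rises on the same sensor size.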

Currently, the optimal value for most video surveillance applications is 2MP resolution (1080p/FullHD), and most sensors from the Low Illumination series are designed for this resolution.

Video delay (Time lag)

All network (IP) surveillance cameras have some delay compared to real time, and neither the cost nor the quality of the camera is the sole determining factor for it. For example, for the same 720p image, the video delay for some cameras is 0.1 s, while for others it may be 0.4 s or even more than 0.7 s. Why does the delay differ? Unlike an analog camera, a network camera compresses the video (a process called encoding), and the user’s device decodes the video for display, which introduces delay. Generally, the shorter the latency, the better the image processor’s capabilities, so you should select a network camera with the lowest video latency.

Heat dissipation

When the security camera is in operation, it generates heat, especially when the infrared light is turned on at night. This rule is true for any CCTV camera. Excessive heat generation increases the chance of overheating and damage to the camera. When choosing megapixel cameras, pay attention to:

Choose a camera with lower power consumption. Low power consumption means the camera saves power and generates less heat. The downside: in winter, a camera with low heat generation can freeze (usually this affects the IR-cut filter), and low consumption may also mean weak IR illumination is installed, which should be taken into account.

Consider using a camera with enhanced low-light performance (no infrared or other artificial light). Such a camera can capture images even in near-darkness (on the order of 0.009 to 0.001 lux).

Choose a camera with a housing that dissipates heat well; a metal case is preferable to a plastic one. To ensure reliable performance, elite-series network cameras use a finned heatsink on the body to maximize heat dissipation.

Price

“High price = high quality” – in most cases this rule holds. Research reports show that consumers often believe a higher price indicates a higher level of quality. But price isn’t the only indicator of good quality, especially when buying products made in China. I have been working in the video surveillance industry for over five years, and I can say that end users, integrators and installers can get high-quality products from Chinese suppliers and manufacturers at a very competitive price. High-end cameras may have a unique body design and offer special features not found in other products.

Technical support

In conclusion, network cameras should also come with good technical support. While IP cameras are becoming easier to set up and operate, end users may still encounter technical issues that require outside help. Faced with such a problem, you will receive technical support from us within 1-2 days, which is quite acceptable. This is precisely why I personally do not advise buying CCTV cameras on AliExpress, since you are unlikely to receive technical support from the sellers afterwards.

Megapixels vs. TV lines

Device type TVL/Format Final resolution (NTSC) Final resolution (PAL) Megapixels (NTSC) Megapixels (PAL)
Analog matrix SONY CCD 480TVL 510H×492V 500H×582V ≈0.25 MP ≈0.29 MP
600TVL 768×494 752×582 ≈0.38 MP ≈0.43 MP
700TVL 976×494 976×582 ≈0.48 MP ≈0.56 MP
Analog sensors SONY CMOS 1000TVL 1280×720 ≈0.92 MP
IP cameras and IP recorders 720p 1280×720 ≈0.92 MP
960p 1280×960 ≈1.23 MP
1080p 1920×1080 ≈2.07 MP
3MP 2048×1536 ≈3.14 MP
5MP 2592×1920 ≈4.97 MP
Analog recorders QCIF 176×144 ≈0.026 MP
CIF 352×288 ≈0.1 MP
HD1 576×288 ≈0.16 MP
D1 (FCIF) 704×576 ≈0.4 MP
960H 928×576 ≈0.53 MP
Format Resolution Aspect ratio Pixel count
QVGA 320×240 4:3 76.8 kpix
SIF (MPEG-1 SIF) 352×240 22:15 84.48 kpix
CIF (MPEG-1 Video CD) 352×288 11:9 101.37 kpix
WQVGA 400×240 5:3 96 kpix
MPEG-2 SVCD 480×576 5:6 276.48 kpix
HVGA 640×240 8:3 153.6 kpix
HVGA 320×480 2:3 153.6 kpix
nHD 640×360 16:9 230.4 kpix
VGA 640×480 4:3 307.2 kpix
WVGA 800×480 5:3 384 kpix
SVGA 800×600 4:3 480 kpix
FWVGA 848×480 16:9 409.92 kpix
qHD 960×540 16:9 518.4 kpix
WSVGA 1024×600 128:75 614.4 kpix
XGA 1024×768 4:3 786.4 kpix
XGA+ 1152×864 4:3 995.3 kpix
WXVGA 1200×600 2:1 720 kpix
HD 720p 1280×720 16:9 921.6 kpix
WXGA 1280×768 5:3 983.04 kpix
SXGA 1280×1024 5:4 1.31 MP
WXGA+ 1440×900 8:5 1.296 MP
SXGA+ 1400×1050 4:3 1.47 MP
XJXGA 1536×960 8:5 1.475 MP
WSXGA (?) 1536×1024 3:2 1.57 MP
WXGA++ 1600×900 16:9 1.44 MP
WSXGA 1600×1024 25:16 1.64 MP
UXGA 1600×1200 4:3 1.92 MP
WSXGA+ 1680×1050 8:5 1.76 MP
Full HD 1080p 1920×1080 16:9 2.07 MP
WUXGA 1920×1200 8:5 2.3 MP
2K (DCI) 2048×1080 256:135 2.2 MP
QWXGA 2048×1152 16:9 2.36 MP
QXGA 2048×1536 4:3 3.15 MP
QHD / Quad HD 1440p 2560×1440 16:9 3.68 MP
WQXGA 2560×1600 8:5 4.09 MP
QSXGA 2560×2048 5:4 5.24 MP
3K 3072×1620 256:135 4.97 MP
WQXGA+ 3200×1800 16:9 5.76 MP
WQSXGA 3200×2048 25:16 6.55 MP
QUXGA 3200×2400 4:3 7.68 MP
UWQHD 3440×1440 43:18 4.95 MP
WQUXGA 3840×2400 8:5 9.2 MP
4K UHD (Ultra HD) 2160p 3840×2160 16:9 8.3 MP
DCI 4K 4096×2160 256:135 8.8 MP
4128×2322 16:9 9.6 MP
4128×3096 4:3 12.78 MP
5120×2160 21:9 11.05 MP
5K UHD 5120×2700 256:135 13.82 MP
5120×2880 16:9 14.74 MP
5120×3840 4:3 19.66 MP
HSXGA 5120×4096 5:4 20.97 MP
6K UHD 6144×3240 256:135 19.90 MP
WHSXGA 6400×4096 25:16 26.2 MP
HUXGA 6400×4800 4:3 30.72 MP
7K UHD 7168×3780 256:135 27.09 MP
8K UHD (Ultra HD) 4320p / Super Hi-Vision 7680×4320 16:9 33.17 MP
WHUXGA 7680×4800 8:5 36.86 MP
DCI 8K 8192×4320 256:135 35.2 MP

Recording-hours tables for H.264 video surveillance cameras are typically given for D1, 1MP (1280×720), 2MP (1920×1080), 3MP (2048×1536) and 5MP (2560×1920) resolution, at frame rates of 8, 12 and 25 fps and for different amounts of motion in the scene.

To reduce the amount of video information stored in DVRs, various compression algorithms are used.

The main advantage of the H.264 algorithm is inter-frame compression: for each subsequent frame, only its differences from the previous one are determined, and only these differences are stored in the archive after compression. Periodically, reference frames (I-frames), which are compressed full images, are stored in the archive, and then only the changes, called intermediate frames (P- and B-frames), are stored for the next 25-100 frames. This compression method achieves high image quality at a small file size, but requires more computation than compression in the MJPEG standard.

When using the MJPEG algorithm, each frame is subjected to compression, regardless of whether it differs from the previous one. Therefore, the only way to reduce the amount of stored data is to increase compression and thereby reduce the quality of the recording. This method is used only in simple stand-alone video recorders that do not require long-term storage of information.

Another advantage of the H.264 algorithm is its ability to work in constant-bit-rate (CBR) mode, in which the degree of compression changes dynamically so that the amount of archive created per second remains fixed.
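Here is a hedged sketch of the storage arithmetic behind recording-hours tables like the one mentioned earlier. The CBR bitrates are illustrative assumptions only; real figures depend on the codec settings, frame rate and how much motion is in the scene:

```python
# Hedged sketch of the storage arithmetic behind recording-hours tables.
# The CBR bitrates below are illustrative assumptions, not measured values.
ASSUMED_BITRATE_MBPS = {
    "D1": 1.0,
    "1MP (720p)": 2.0,
    "2MP (1080p)": 4.0,
    "3MP": 6.0,
    "5MP": 8.0,
}

DISK_SIZE_GB = 1000  # a 1 TB drive, in decimal gigabytes

for label, mbps in ASSUMED_BITRATE_MBPS.items():
    gb_per_hour = mbps * 3600 / 8 / 1000  # Mbps -> GB per hour
    hours_on_disk = DISK_SIZE_GB / gb_per_hour
    print(f"{label}: ~{gb_per_hour:.2f} GB/hour, ~{hours_on_disk:.0f} hours on a 1 TB drive")
```

With these assumed bitrates, a 2MP (1080p) stream fills roughly 1.8 GB per hour, so a 1 TB drive holds around 550 hours of continuous recording; halving the bitrate roughly doubles the retention time.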