  #4  
Old 01-02-2019, 10:29 AM
sil (Steve)
Not even a speck of dust

 
Join Date: Jun 2012
Location: Canberra
Posts: 1,474
You're really talking about data rate; sensor size and resolution are irrelevant on their own. Colour is captured at a depth of at least 8 bits per channel, usually more. At 8 bits per channel, multiply by three for the separate R, G and B values that make up a 24-bit colour image: that's 24 bits, or 3 bytes, per pixel. Multiply by the megapixel count in full numeric form to get the number of bytes in one image.

There's a limit to how much data can be captured and sent to storage: the bandwidth of the serial link, which you'll notice is usually quoted in Mbps (megabits per second), again a data size per second. Storage media have their own write rates, again in bits per second (notice the pattern yet?). It's just a matter of doing the maths: multiply the numbers and you get a huge figure in bits, which you divide by 8 for bytes and then repeatedly by 1,024 for kilobytes, megabytes and so on.

So yes, a 36 megapixel image gives you lots of detail, but when you're only interested in a small portion you're wasting most of the bandwidth. Capturing at a reduced resolution, as Paul mentioned, lets you capture more frames in the same amount of time because it makes more effective use of the bandwidth, i.e. of how much data can be transferred from the camera and written to the storage media. Slow storage means loss of data because the write rate becomes the bottleneck.
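The arithmetic above can be sketched in a few lines. This is just an illustration with assumed example numbers (a 36 MP full frame versus a 1 MP region of interest, and a hypothetical 300 Mbps link); it's not specific to any particular camera or interface.

```python
BITS_PER_CHANNEL = 8   # 8-bit depth per colour channel
CHANNELS = 3           # R, G and B -> 24-bit colour
LINK_MBPS = 300        # assumed transfer bandwidth, megabits per second

def frame_bytes(megapixels):
    """Bytes in one uncompressed 24-bit colour frame."""
    pixels = megapixels * 1_000_000   # megapixel count in full numeric form
    return pixels * CHANNELS * BITS_PER_CHANNEL // 8

def max_fps(megapixels):
    """Frames per second the assumed link can sustain."""
    bits_per_frame = frame_bytes(megapixels) * 8
    return (LINK_MBPS * 1_000_000) / bits_per_frame

full = frame_bytes(36)   # 108,000,000 bytes per full frame
roi = frame_bytes(1)     # 3,000,000 bytes per reduced-resolution frame

print(f"36 MP frame: {full / 1024**2:.1f} MiB, max {max_fps(36):.2f} fps")
print(f" 1 MP ROI:   {roi / 1024**2:.1f} MiB, max {max_fps(1):.1f} fps")
```

With these assumed numbers the full 36 MP frame is roughly 103 MiB and the link can only move about a third of a frame per second, while the 1 MP region fits about 12 frames per second down the same pipe, which is exactly why cropping the capture resolution buys you frame rate.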