mirror of https://github.com/dsoprea/go-exif.git
Given a stream of data, it is possible to determine the beginning of EXIF data but not the end. Therefore, either an image-aware implementation must know how to parse an image and extract the EXIF data, or a brute-force search implementation (one of which is provided by this project) must find the start anchor and then return all bytes from that point to the end of the file. We have been made aware of some use-cases where a brute-force search might be unavoidable due to trust or stability issues with the image structure. Returning everything from the anchor onward leads to large allocations. This can be avoided by adding support for either a byte-slice or an `io.ReadSeeker`. Since the EXIF structure is typically not read-intensive (a couple of kilobytes if no thumbnail is present), this should have a minimal performance impact. Closes #42
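As a rough sketch of the `io.ReadSeeker` approach described above: the function below scans for a start anchor in fixed-size chunks and returns only its offset, leaving the caller to seek there and read on demand rather than allocating the rest of the file up front. The name `findExifStart`, the chunk size, and the error handling are illustrative assumptions, not this project's actual API; the anchors shown are the standard TIFF byte-order headers that an EXIF body begins with.

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"os"
)

// anchors are the TIFF byte-order headers that begin an EXIF blob:
// "II*\x00" (little-endian) and "MM\x00*" (big-endian). A real
// implementation validates more of the header than just these bytes.
var anchors = [][]byte{
	{'I', 'I', 0x2a, 0x00},
	{'M', 'M', 0x00, 0x2a},
}

// findExifStart (hypothetical helper) scans rs in fixed-size chunks and
// returns the absolute offset of the first anchor, so the caller can
// Seek there instead of holding the remainder of the file in memory.
func findExifStart(rs io.ReadSeeker) (int64, error) {
	const chunkSize = 64 * 1024
	const tailLen = 3 // anchor length - 1, carried across chunk boundaries

	if _, err := rs.Seek(0, io.SeekStart); err != nil {
		return 0, err
	}

	buf := make([]byte, tailLen+chunkSize)
	carried := 0
	var offset int64 // absolute file offset of buf[0]

	for {
		n, err := io.ReadFull(rs, buf[carried:])
		window := buf[:carried+n]

		for _, anchor := range anchors {
			if i := bytes.Index(window, anchor); i >= 0 {
				return offset + int64(i), nil
			}
		}

		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return 0, errors.New("no EXIF anchor found")
		} else if err != nil {
			return 0, err
		}

		// Carry the last few bytes forward so an anchor that straddles
		// two chunks is still matched in the next window.
		copy(buf, window[len(window)-tailLen:])
		offset += int64(len(window) - tailLen)
		carried = tailLen
	}
}

func main() {
	f, err := os.Open(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	offset, err := findExifStart(f)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	fmt.Printf("EXIF candidate found at offset %d\n", offset)
}
```

Because only an offset is returned, the large allocation is deferred until the caller knows how much data it actually needs, which is the point of accepting an `io.ReadSeeker` alongside a byte-slice.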
README.md
accessor.go
exif_927C_maker_note.go
exif_927C_maker_note_test.go
exif_8828_oecf.go
exif_8828_oecf_test.go
exif_9000_exif_version.go
exif_9000_exif_version_test.go
exif_9101_components_configuration.go
exif_9101_components_configuration_test.go
exif_9286_user_comment.go
exif_9286_user_comment_test.go
exif_A000_flashpix_version.go
exif_A000_flashpix_version_test.go
exif_A20C_spatial_frequency_response.go
exif_A20C_spatial_frequency_response_test.go
exif_A300_file_source.go
exif_A300_file_source_test.go
exif_A301_scene_type.go
exif_A301_scene_type_test.go
exif_A302_cfa_pattern.go
exif_A302_cfa_pattern_test.go
exif_iop_0002_interop_version.go
exif_iop_0002_interop_version_test.go
gps_001B_gps_processing_method.go
gps_001B_gps_processing_method_test.go
registration.go
type.go
README.md
0xa40b
The specification is not specific or clear enough for this tag to be handled. Without a working example, we're deferring until some point in the future when either we or someone else has a better understanding.