{"id":1620,"date":"2021-06-02T15:21:26","date_gmt":"2021-06-02T15:21:26","guid":{"rendered":"http:\/\/sites.saic.edu\/aoc\/?p=1620"},"modified":"2021-06-02T15:21:26","modified_gmt":"2021-06-02T15:21:26","slug":"agisoft-models-from-photos","status":"publish","type":"post","link":"https:\/\/sites.saic.edu\/aoc\/agisoft-models-from-photos\/","title":{"rendered":"Agisoft, Models from Photos"},"content":{"rendered":"<p>Written by: <strong>Jack Wilson<\/strong><\/p>\n<p>If you ever find yourself wanting to make a 3D scan of an object but lacking any of the equipment necessary to do so, don\u2019t fret. With a photogrammetry program and a lot of patience, anyone can transform an ordinary collection of photos of an object into a 3D model ready for use. I\u2019m going to share the process I used to transform photos of a teddy bear into a digital 3D model, complete with its own texture, using the photogrammetry program Agisoft Metashape.<\/p>\n<p>&nbsp;<\/p>\n<p>Photogrammetry is the practice of taking, measuring, and interpreting photographs to obtain accurate information about the real world. Many maps, drawings, and measurements have used photogrammetry to ensure their accuracy. Recently, photogrammetry programs have advanced to the point where they can measure a group of ordinary photos and interpret that information as a 3D model.<\/p>\n<p>&nbsp;<\/p>\n<p>Before Metashape could do any of that for me, though, I had to take a lot of photographs. I started by placing the bear in the light of a window. Then I took 144 photos to catch every angle of the teddy bear. Agisoft requires at least 50-65 photos to make a somewhat accurate model, but the quality of the finished model will increase if it gets more photos.
I was sure to circle the bear with my camera and take extra pictures of areas I was worried Agisoft would struggle with.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-1644 size-large aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Uneven-1024x576.jpg\" alt=\"\" width=\"1024\" height=\"576\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Uneven-1024x576.jpg 1024w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Uneven-300x169.jpg 300w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Uneven-768x432.jpg 768w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Uneven-1536x864.jpg 1536w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Uneven-150x84.jpg 150w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Uneven.jpg 1778w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p>After taking the photos, I opened Metashape and uploaded my photos. Under \u201cWorkflow\u201d there is the option to add individual photos or a folder of photos. 
Adding the first round of photos creates a new chunk within Metashape, and all of the photos added can be found in the window under the model viewer.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-1647 aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Add-photos-1024x535.png\" alt=\"\" width=\"1024\" height=\"535\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Add-photos-1024x535.png 1024w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Add-photos-300x157.png 300w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Add-photos-768x401.png 768w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Add-photos-1536x802.png 1536w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Add-photos-2048x1069.png 2048w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Add-photos-150x78.png 150w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p>Next, align these photos into a 3D space. Under \u201cWorkflow\u201d, again, is the option to align photos. Clicking on this will make an options window pop up. Generally, this is where I determined how accurate I wanted the end result to be, since all future processes rely on this first step. There is the option to change the accuracy to either \u201cLowest\u201d, \u201cLow\u201d, \u201cMedium\u201d, \u201cHigh\u201d, or \u201cHighest\u201d. Metashape then spends some time putting the photos into place. It could take anywhere from five minutes to hours, or even days, depending on how many photos there are, which accuracy setting was selected, and how powerful the computer doing the processing is. When it\u2019s finally done, a cluster of points and blue rectangles will fill the \u201cModel\u201d window. 
Metashape will also tell you how many photos it was unable to align after this process.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-1649 aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/low-density-cloud-1024x449.png\" alt=\"\" width=\"1024\" height=\"449\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/low-density-cloud-1024x449.png 1024w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/low-density-cloud-300x131.png 300w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/low-density-cloud-768x336.png 768w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/low-density-cloud-1536x673.png 1536w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/low-density-cloud-2048x897.png 2048w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/low-density-cloud-150x66.png 150w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p>The blue rectangles represent where Metashape thinks each picture was taken, while the point cloud should form an accurate impression of the object photographed. You can examine the cloud by dragging the transparent grey ball in the center of the window to rotate the scene and right-clicking to pan your view. Scrolling will affect the zoom. My bear came out quite nicely in this case, but I wanted to squeeze more information out of the photos. 
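The "Align Photos" step can also be scripted: Metashape Pro ships a Python API. Below is a minimal sketch of this step, assuming a 1.6+-era API where the GUI accuracy labels correspond to `downscale` factors; the helper name `align_photos` and the exact mapping are my own illustration, so verify them against the API reference for your version.

```python
# Sketch of the "Align Photos" step via Metashape Pro's Python API.
# ACCURACY_DOWNSCALE reflects how the GUI accuracy labels map to the
# API's downscale argument in Metashape 1.6+ (an assumption to verify).
ACCURACY_DOWNSCALE = {
    "Highest": 0,   # upsampled images, slowest
    "High": 1,      # full-resolution images
    "Medium": 2,    # each side downscaled 2x
    "Low": 4,
    "Lowest": 8,
}

def align_photos(photo_paths, accuracy="High"):
    """Hypothetical helper: add photos to a new chunk, align them,
    and return how many photos failed to align (as the GUI reports)."""
    import Metashape  # importable only inside Metashape Pro's Python
    chunk = Metashape.app.document.addChunk()
    chunk.addPhotos(photo_paths)
    chunk.matchPhotos(downscale=ACCURACY_DOWNSCALE[accuracy])
    chunk.alignCameras()
    # Cameras that could not be aligned are left without a transform.
    return sum(1 for cam in chunk.cameras if cam.transform is None)
```

Higher accuracy (a smaller downscale factor) matches features on larger images, which is why the align step can stretch from minutes to days.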
So, the next step was to build a dense cloud, which I could do under \u201cWorkflow\u201d.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-1654 aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Desne-point-cloud-1024x562.png\" alt=\"\" width=\"1024\" height=\"562\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Desne-point-cloud-1024x562.png 1024w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Desne-point-cloud-300x165.png 300w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Desne-point-cloud-768x422.png 768w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Desne-point-cloud-1536x843.png 1536w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Desne-point-cloud-2048x1124.png 2048w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Desne-point-cloud-150x82.png 150w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p>At this point, Metashape will most likely have a near-perfect representation of the object or scene you pictured. If you want to look at the results at any point without the cameras in the way, deselecting the camera icon in the toolbar will hide the blue rectangles. After getting the dense point cloud, I started to build the mesh. This option is, again, under \u201cWorkflow\u201d. You can choose to build the mesh from the sparse cloud or the dense cloud. 
There is also an option to control how many faces the mesh will have.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-1651 aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Model-solid-1024x562.png\" alt=\"\" width=\"1024\" height=\"562\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Model-solid-1024x562.png 1024w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Model-solid-300x165.png 300w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Model-solid-768x422.png 768w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Model-solid-1536x843.png 1536w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Model-solid-2048x1124.png 2048w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/Model-solid-150x82.png 150w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p>With the mesh of my bear, I can change the model view to \u201cModel Solid\u201d and see Metashape\u2019s imperfections. The hard canvas of the heart and the fabric of the clothes are depicted almost perfectly, but the fur is bumpy and sporadic instead of soft and smooth. Metashape does a great job translating hard, matte surfaces and amazingly can capture the form of things like bark or stonework in the meshes it generates. Where Metashape struggles is with thin strands, like hair or fur, and with reflective or translucent materials. Sometimes the program gets confused and creates texture where there shouldn\u2019t be any, such as on the floor in my example. 
However, Metashape has one more step that covers up the flaws of the model.<\/p>\n<h3>Creating a texture to wrap around the model<\/h3>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-1656 aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured1-1024x920.png\" alt=\"\" width=\"1024\" height=\"920\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured1-1024x920.png 1024w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured1-300x270.png 300w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured1-768x690.png 768w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured1-150x135.png 150w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured1.png 1055w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p>When making a texture from \u201cWorkflow\u201d, Metashape lets you choose how big the texture will be and defaults to a 4096 (4K) pixel square. The texture wraps around the model and creates the illusion of details that couldn\u2019t be represented in the model. This includes color, lighting, and the finer physical elements. My finished model is only bumpy where it needs to be and smooth everywhere else. The stitches of the canvas, the speckles in the fabric, and even the fur on the teddy bear are all reproduced from the photographs. 
This digital teddy bear looks almost exactly like the one I photographed!<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-1657 aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured2-1024x902.png\" alt=\"\" width=\"1024\" height=\"902\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured2-1024x902.png 1024w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured2-300x264.png 300w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured2-768x676.png 768w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured2-150x132.png 150w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured2.png 1079w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p>&nbsp;<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-1658 aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured3.png\" alt=\"\" width=\"1000\" height=\"481\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured3.png 1000w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured3-300x144.png 300w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured3-768x369.png 768w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/UnevenTeddyTextured3-150x72.png 150w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/p>\n<p>All that\u2019s left to do is export the model, points, and\/or texture!<\/p>\n<p>Now, if this model were put into a different digital scene, the lighting baked into the texture may conflict with the lighting of that environment, or perhaps the floor 
would be troublesome to work with. Issues with the texture\u2019s lighting have to be resolved at the start, when taking the photographs, since Metashape uses the photos to build the texture for the model. Any unwanted aspects of the model can be removed in Agisoft at any part of the workflow.<\/p>\n<p>For my next model, I wanted an evenly lit texture. To do so, I would need to evenly light the teddy bear. Traditionally, a cloudy day or studio lights set up with softboxes do a great job of creating soft, even lighting for photographs or cinema. Without clouds or any lighting equipment, I tried my best with two desk lamps and my roommate\u2019s ring light.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-1661 aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/PXL_20210324_155129841.jpg\" alt=\"\" width=\"1000\" height=\"786\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/PXL_20210324_155129841.jpg 1000w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/PXL_20210324_155129841-300x236.jpg 300w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/PXL_20210324_155129841-768x604.jpg 768w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/PXL_20210324_155129841-150x118.jpg 150w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/p>\n<p>I took my photos with a DSLR camera this time as well. Since the camera had a variable zoom lens, I had to be sure to keep the lens at the same zoom during the shoot. Lens distortion can mess up Metashape\u2019s measurements. I was shown a pop-up reminding me of this when I uploaded my photos into Metashape.<\/p>\n<p>Once the photos were aligned and the point cloud was generated, I decided to get rid of all of the extra information I didn\u2019t want to keep. Points can be selected with any of the selection tools and then cropped or deleted. 
I selected what I wanted to keep, then inverted the selection so I could delete everything I didn\u2019t want.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-1663 aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/The-freeform-selection-tool.png\" alt=\"\" width=\"616\" height=\"367\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/The-freeform-selection-tool.png 616w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/The-freeform-selection-tool-300x179.png 300w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/The-freeform-selection-tool-150x89.png 150w\" sizes=\"auto, (max-width: 616px) 100vw, 616px\" \/><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-1664 aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/selecting-and-inverting-1024x550.png\" alt=\"\" width=\"1024\" height=\"550\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/selecting-and-inverting-1024x550.png 1024w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/selecting-and-inverting-300x161.png 300w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/selecting-and-inverting-768x412.png 768w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/selecting-and-inverting-1536x825.png 1536w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/selecting-and-inverting-150x81.png 150w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/selecting-and-inverting.png 1965w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p>I then made a dense point cloud, got rid of all of the extra information, made a mesh, and a texture. The resulting model was less than ideal. 
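The select-then-invert cleanup is just a set complement over the point indices. A plain-Python sketch of the idea (not Metashape's actual selection API), with an axis-aligned box standing in for the selection tool:

```python
# "Select what you want, invert, delete the rest" on a point cloud,
# sketched with an axis-aligned box standing in for the selection tool.

def inside_box(point, lo, hi):
    """True if the 3D point lies inside the box spanned by lo and hi."""
    return all(lo[i] <= point[i] <= hi[i] for i in range(3))

def crop_to_selection(points, lo, hi):
    """Keep only the points inside the box; drop the inverted selection."""
    selected = {i for i, p in enumerate(points) if inside_box(p, lo, hi)}
    inverted = set(range(len(points))) - selected  # everything else
    return [p for i, p in enumerate(points) if i not in inverted]

# Two points near the origin (the subject) and one stray background point:
cloud = [(0.1, 0.2, 0.3), (5.0, 5.0, 5.0), (0.4, 0.1, 0.2)]
kept = crop_to_selection(cloud, lo=(0, 0, 0), hi=(1, 1, 1))
# kept contains only the two points near the origin
```

Inverting is usually easier than selecting the junk directly, since the subject is compact while the stray points are scattered everywhere.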
It was dark and, most importantly, the lighting was still pretty uneven.<\/p>\n<p>&nbsp;<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-1665 aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/EvenTeddy1-1.png\" alt=\"\" width=\"725\" height=\"723\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/EvenTeddy1-1.png 725w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/EvenTeddy1-1-300x300.png 300w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/EvenTeddy1-1-150x150.png 150w\" sizes=\"auto, (max-width: 725px) 100vw, 725px\" \/><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-1666 aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/EvenTeddy2.png\" alt=\"\" width=\"917\" height=\"848\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/EvenTeddy2.png 917w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/EvenTeddy2-300x277.png 300w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/EvenTeddy2-768x710.png 768w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/EvenTeddy2-150x139.png 150w\" sizes=\"auto, (max-width: 917px) 100vw, 917px\" \/><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-1667 aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/EvenTeddy3.png\" alt=\"\" width=\"848\" height=\"869\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/EvenTeddy3.png 848w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/EvenTeddy3-293x300.png 293w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/EvenTeddy3-768x787.png 768w, 
https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/EvenTeddy3-150x154.png 150w\" sizes=\"auto, (max-width: 848px) 100vw, 848px\" \/><\/p>\n<p>I did not give up, however, and tried using my phone camera again, this time with flash. Every photo I took with flash would light up whichever side my camera was facing. To Metashape, that\u2019s almost like having 120 different light sources for this one model.<\/p>\n<p>I took the photos, went through the Metashape workflow, and made my new model. It did not disappoint.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-1669 aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/FlashTeddy1.png\" alt=\"\" width=\"866\" height=\"805\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/FlashTeddy1.png 866w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/FlashTeddy1-300x279.png 300w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/FlashTeddy1-768x714.png 768w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/FlashTeddy1-150x139.png 150w\" sizes=\"auto, (max-width: 866px) 100vw, 866px\" \/> <img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-1670 aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/FlashTeddy2.png\" alt=\"\" width=\"873\" height=\"812\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/FlashTeddy2.png 873w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/FlashTeddy2-300x279.png 300w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/FlashTeddy2-768x714.png 768w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/FlashTeddy2-150x140.png 150w\" sizes=\"auto, (max-width: 873px) 100vw, 873px\" \/><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-large 
wp-image-1671 aligncenter\" src=\"http:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/FlashTeddy3.png\" alt=\"\" width=\"700\" height=\"827\" srcset=\"https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/FlashTeddy3.png 700w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/FlashTeddy3-254x300.png 254w, https:\/\/sites.saic.edu\/aoc\/wp-content\/uploads\/sites\/68\/2021\/06\/FlashTeddy3-150x177.png 150w\" sizes=\"auto, (max-width: 700px) 100vw, 700px\" \/><\/p>\n<p>It\u2019s not perfect, I will admit. The model is a little messy underneath the texture. The texture itself is much better than the other two, but there are errors and occasional dark spots. However, given the easy access to equipment, straightforward process, and superb result, I was more than happy with Agisoft Metashape.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Written by: Jack Wilson If you ever find yourself wanting to make a 3D scan of an object but lacking any of the equipment necessary to do so, don\u2019t fret. 
With a photogrammetry program and a lot of patience, anyone can transform a normal collection of photos of an object into a 3D model ready &hellip; <a href=\"https:\/\/sites.saic.edu\/aoc\/agisoft-models-from-photos\/\">Continued<\/a><\/p>\n","protected":false},"author":165,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1620","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"acf":[],"_links":{"self":[{"href":"https:\/\/sites.saic.edu\/aoc\/wp-json\/wp\/v2\/posts\/1620","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sites.saic.edu\/aoc\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sites.saic.edu\/aoc\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sites.saic.edu\/aoc\/wp-json\/wp\/v2\/users\/165"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.saic.edu\/aoc\/wp-json\/wp\/v2\/comments?post=1620"}],"version-history":[{"count":11,"href":"https:\/\/sites.saic.edu\/aoc\/wp-json\/wp\/v2\/posts\/1620\/revisions"}],"predecessor-version":[{"id":1672,"href":"https:\/\/sites.saic.edu\/aoc\/wp-json\/wp\/v2\/posts\/1620\/revisions\/1672"}],"wp:attachment":[{"href":"https:\/\/sites.saic.edu\/aoc\/wp-json\/wp\/v2\/media?parent=1620"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sites.saic.edu\/aoc\/wp-json\/wp\/v2\/categories?post=1620"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sites.saic.edu\/aoc\/wp-json\/wp\/v2\/tags?post=1620"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
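For anyone repeating this workflow often, the whole GUI sequence the post walks through (add photos, align, dense cloud, mesh, texture, export) can be chained into one script with Metashape Pro's Python API. A sketch using 1.x-era method names (for example, buildDenseCloud was renamed buildPointCloud around version 2.0), so check the API reference for your version before relying on it:

```python
# End-to-end sketch of the photos-to-model workflow described above,
# using Metashape Pro's Python API (1.x-era method names; verify
# signatures against the API reference for your version).

def photos_to_model(photo_paths, out_path, texture_size=4096):
    import Metashape  # available only inside Metashape Pro
    chunk = Metashape.app.document.addChunk()
    chunk.addPhotos(photo_paths)
    chunk.matchPhotos(downscale=1)     # "High" accuracy
    chunk.alignCameras()               # sparse cloud + camera positions
    chunk.buildDepthMaps(downscale=2)
    chunk.buildDenseCloud()            # buildPointCloud() in 2.x
    chunk.buildModel(source_data=Metashape.DenseCloudData)
    chunk.buildUV()
    chunk.buildTexture(texture_size=texture_size)  # 4096 = the 4K default
    chunk.exportModel(out_path)        # writes the mesh plus its texture
```

Scripting the pipeline makes experiments like the window-light, ring-light, and flash shoots above much cheaper to compare, since only the photo set changes between runs.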