ArcGIS Reality
High resolution imagery, collected using drones, aircraft, and satellites, is increasingly accessible and can be an important resource for developing foundational data layers for mapping and creating digital twins. In this tutorial, you'll use aerial images, position data, and ground control points to create 3D derived imagery products using ArcGIS Reality Studio.

As a geospatial strategist working for the city of Frankfurt, you have been asked to provide a 3D mesh for the city center to support decision makers as they plan, build, and maintain infrastructure, and conduct emergency operations. For that purpose, the city has tasked an aerial survey company to collect aerial images and to provide accurate GNSS position information per image.

First, you'll create a capture session, which contains the images collected in one flight. Next, you'll perform an image alignment to refine the initial positioning information per image and correct distortion. During the alignment process, you'll use tie points and ground control points to refine the image referencing using updated measurements. Then, you'll generate a point cloud and a textured mesh city model.

To perform these tasks, ArcGIS Reality Studio requires the following:

* Information about the camera, including focal length and sensor size, as well as the location at which each image was captured. This information is typically shared by the airborne service provider in the form of a camera protocol put together by the sensor manufacturer.
* Positioning information per image, as it results from GPS/INS post-processing of the flight trajectory. This information is commonly shared in the form of an ASCII file including image name, image position (X, Y, Z), and image rotation (omega, phi, kappa).
* Information about the horizontal and vertical reference coordinate system of the input data.
* Optionally, ground control points.
* Optionally, 3D water body geometry (a 3D vector shape representing water areas, used to flatten water during the reconstruction process) and a region of interest.

The duration of this tutorial is approximately 1 hour and 30 minutes to work through the steps. However, additional processing time is required, and this time depends greatly on your computer resources. Processing this data required 8 hours on a system with 128 GB RAM, an AMD Ryzen 24-core CPU @ 3.8 GHz, and an Nvidia GeForce RTX 4090 GPU. This tutorial was last tested on June 23, 2023.

VIEW FINAL RESULT

Requirements

* ArcGIS Reality Studio
* ArcGIS Reality Desktop license
* ArcGIS Coordinate Systems Data (installed separately from ArcGIS Pro)
* Windows 10 or 11 operating system, with a minimum of 64 GB RAM

Outline

* Create a capture session (15 minutes): Import the images, positioning information, and camera specifications.
* Perform an alignment (60 minutes): Align the images and refine the alignment using aerotriangulation.
* Perform a reconstruction (15 minutes): Create a reconstruction of the scene from the images and generate a textured 3D mesh and point cloud.

Create a capture session

A capture session combines all relevant information captured in a single photo flight that will be required to do the alignment and reconstruction steps. Capture sessions may be built for imagery captured with nadir-only sensors or multihead sensor systems, with the corresponding positioning information for each image.

In a nadir sensor system, the sensor points straight down and captures imagery of the surface under it. The images collected this way are referred to as nadir images. The following image is an example of nadir imagery. The following image is a diagram of a nadir camera cone and image footprint.

In a multihead sensor system, sensors point in multiple directions, at angles forward and backward and to the sides. The images collected at an angle are referred to as oblique images.
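Both nadir and oblique images obey the same pinhole-camera relationships. For the simpler nadir case, the ground footprint and ground sample distance (GSD) can be sketched in a few lines; all numbers below are hypothetical and are not taken from the Frankfurt flight:

```python
def nadir_footprint_m(flying_height_m, focal_length_mm, sensor_w_mm, sensor_h_mm):
    """Ground footprint (width, height) in meters of a vertical (nadir) image."""
    scale = flying_height_m / (focal_length_mm / 1000.0)  # image scale number
    return (sensor_w_mm / 1000.0 * scale, sensor_h_mm / 1000.0 * scale)

def gsd_cm(flying_height_m, focal_length_mm, pixel_size_um):
    """Ground sample distance in centimeters per pixel."""
    return pixel_size_um * 1e-6 * flying_height_m / (focal_length_mm / 1000.0) * 100.0

# Hypothetical large-format survey camera flown 1,000 m above ground:
w, h = nadir_footprint_m(1000.0, 120.0, 100.0, 80.0)
print(round(w), "x", round(h), "m footprint")          # 833 x 667 m footprint
print(round(gsd_cm(1000.0, 120.0, 4.6), 1), "cm GSD")  # 3.8 cm GSD
```

The same geometry explains why quality thresholds later in this tutorial are expressed in multiples of the GSD: the GSD is the size of one pixel projected onto the ground.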
Multihead systems may also include a sensor to collect nadir images. The following image is an example of oblique imagery. The following image is a diagram of a multihead sensor, showing the camera cones and image footprints.

Positioning information may be based on navigation information or on high accuracy positions derived in an external aerotriangulation process.

The data you'll add to your capture session consists of the following:

* 873 images captured with a multihead sensor system (IGI UrbanMapper)
* An ASCII file including the positioning information per image (GNSS_IMU_whole_Area.csv)
* A file including the necessary sensor specifications (Camera_template_Frankfurt_UM1.json)
* A file geodatabase containing the geometry for the region of interest and a water body (AOI_and_Waterbody.gdb)

Download the data

The data for this tutorial takes up about 26 GB of disk space.

1. Download the Frankfurt City Collection zip file.

Note: Depending on your connection speed, this 26 GB file may take a long time to download.

2. Extract the zip file to a folder on your local machine, for example D:\Datasets\Frankfurt_City_Collection.

Start a capture session

Next, you'll create the capture session.

1. Start ArcGIS Reality Studio.
2. On the Welcome screen, click New Capture Session.
3. In the Capture Session pane, for Capture Session Name, type Frankfurt_Flight_RS.
4. For Orientation File Format, click ASCII Text File (.txt, .csv, etc.).

A notice appears that the data must be in a supported orientation data format convention.

5. For Orientation File Path, browse to the Frankfurt_City_Collection folder that you extracted. Select GNSS_IMU_whole_Area.csv and click OK.
6. For Spatial Reference, click the browse button.
7. In the Spatial Reference window, for Current XY, in the search box, type 25832 and press Enter.
The search for this Well Known ID (WKID) code returns the ETRS 1989 UTM Zone 32N coordinate system. This is the XY coordinate system used in the position file.

8. In the list of results, click ETRS 1989 UTM Zone 32N.

You've set the XY coordinate system. Next, you'll set the Z coordinate system.

9. Click Current Z.
10. For Current Z, in the search box, type 7837 and press Enter.
11. In the list of results, click DHHN2016 height.

You've set the Z coordinate system.

12. In the Spatial Reference window, click OK.
13. In the Data Parsing section, for Parse from row, type 22 and press Enter.

The GNSS_IMU_whole_Area.csv orientation file that you imported is a comma-delimited text file. It includes a header section of 21 lines, while the data that ArcGIS Reality Studio will use to process the images begins at line 22. Entering 22 in this box skips the header rows.

Note: Another way to skip the header is to specify the character that begins comment rows. In this file, the # symbol is the comment character, so you could also skip the header by typing # in the Symbols used to ignore rows box.

Once ArcGIS Reality Studio can read the file correctly, the number of detected orientations is listed in a green highlight box. In this case, 7,775 orientations are detected. These are the orientations collected during the flight. This is greater than the 873 images used in the tutorial because the tutorial images are a subset of a larger collection.

14. Click Next.

Define the parameters of the orientation file

There are multiple image orientation systems, and they label the collected parameter data in different ways. In this case, the GNSS_IMU_whole_Area.csv file you imported contains the image name, X, Y, Z, Omega, Phi, and Kappa values in the same order as they appear in the Data Labeling table.
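The parsing convention just described (skip '#' header rows, then read image name, X, Y, Z, omega, phi, kappa in order) can be sketched in a few lines of Python. This is an illustration of the file layout only, not ArcGIS Reality Studio's own reader, and the rotation helper assumes the common omega-phi-kappa (rotate about X, then Y, then Z) convention, which the camera protocol from your provider should confirm:

```python
import csv
import math

def parse_orientations(lines):
    """Parse comma-delimited orientation records: name, X, Y, Z, omega, phi, kappa.
    Rows beginning with '#' (like the 21-line header in GNSS_IMU_whole_Area.csv) are skipped."""
    rows = []
    for rec in csv.reader(lines):
        if not rec or rec[0].lstrip().startswith("#"):
            continue  # comment/header row
        name, x, y, z, om, ph, ka = (v.strip() for v in rec[:7])
        rows.append({"image": name,
                     "xyz": (float(x), float(y), float(z)),
                     "opk_deg": (float(om), float(ph), float(ka))})
    return rows

def opk_to_matrix(omega_deg, phi_deg, kappa_deg):
    """Rotation matrix R = R_x(omega) @ R_y(phi) @ R_z(kappa), angles in degrees."""
    o, p, k = (math.radians(a) for a in (omega_deg, phi_deg, kappa_deg))
    so, co = math.sin(o), math.cos(o)
    sp, cp = math.sin(p), math.cos(p)
    sk, ck = math.sin(k), math.cos(k)
    return [[cp * ck,                 -cp * sk,                 sp],
            [co * sk + so * sp * ck,  co * ck - so * sp * sk,  -so * cp],
            [so * sk - co * sp * ck,  so * ck + co * sp * sk,   co * cp]]

# Hypothetical records in the convention described above:
sample = ["# project: Frankfurt (header row, ignored)",
          "IMG_0001_NAD,465000.000,5551000.000,1050.000,0.10,-0.20,90.00"]
for r in parse_orientations(sample):
    print(r["image"], r["xyz"], r["opk_deg"])
```

In practice you would pass an open file object instead of the `sample` list; the skip-by-comment-character behavior matches the Symbols used to ignore rows option described in the note above.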
You'll match the fields to the data positions in the file.

1. In the Data Labeling section, for Image Name, choose the first item in the list.

Place 1 in the file contains data that consists of a code value separated by underscore characters.

2. For X, choose the second item in the list.

Place 2 in the file contains data that consists of floating point data. You'll continue mapping the field names to places in the data file.

3. For Y, choose the third item in the list.
4. For Z, choose the fourth item in the list.
5. For Omega, choose the fifth item in the list.
6. For Phi, choose the sixth item in the list.
7. For Kappa, choose the seventh item in the list.

When you have set the Kappa value, in the Camera System Assignment section, a green box appears with the number of assigned orientations from the file.

8. Skip the Camera Name field.

Relate the orientation data to the images

The orientation data file contains information that ArcGIS Reality Studio will use in reconstructing the scene. There are multiple camera and orientation tracking systems. The relationship between the position data and the cameras is established in different ways, depending on the convention used by the system used to collect your images. The following are the two main ways:

* The ASCII orientation file may include a column with the camera names.
* The image file name includes a string that identifies the camera.

In this tutorial, the image file names contain a string to identify the camera.

1. In the Camera System Assignment section, click the options button and choose Import Template.
2. Browse to the Frankfurt_City_Collection folder, select Camera_template_Frankfurt_UM1.json, and click OK.

The Camera System Assignment section updates to include a table for the camera names and ID values.
3. Click the Camera_Name row and click Delete.

Camera_Name is the default entry and is not needed now that the table of camera names is populated. Next, you'll enter the codes that correspond to the cameras in the image file names.

4. For Left, in the Camera ID column, type the code _11000.
5. For Forward, in the Camera ID column, type the code _11900.
6. For Nadir, in the Camera ID column, type the code _NAD.
7. For Backward, in the Camera ID column, type the code _11600.
8. For Right, in the Camera ID column, type the code _11100.

The Camera System Assignment table now matches the camera names to the Camera ID codes embedded in the image file names. The Capture Session Selection section appears below the Camera System Assignment table. This section allows you to choose to process specific camera sessions or all camera sessions. In this tutorial, you'll process all of the camera sessions.

9. Click CaptureSession to select all of the capture sessions.

The warning icons beside each capture session indicate that images are not yet linked up with the orientation data. You'll fix that issue after the capture session has been created.

10. Click Next.

Review the camera sessions

The Camera Sessions section allows you to review the parameters of the cameras used to capture the images.

1. In the Camera Sessions section, click Forward_Frankfurt_Flight_RS.

The next sections contain information about the camera used to collect the forward-looking images. This information was included in the Camera_template_Frankfurt_UM1.json file that you imported earlier.

2. Scroll down to see the data in the Sensor Definition section.

Each of the camera sessions listed has a corresponding table of data documenting the physical properties of the camera and lens system used to capture that set of images.
Note: If the camera data had not been imported from the Camera_template_Frankfurt_UM1.json file, you could manually enter the data from your imagery provider.

3. Optionally, click the other camera sessions and review their parameters.
4. Click Finish.

The capture session is constructed. This process will take a minute or so. The Project Tree pane appears, and the globe view appears, showing the locations of the camera captures.

Link capture sessions to the image files

Next, you'll connect the capture sessions you've selected to the image file data location.

1. In the Project Tree pane, look at the entry for Forward_Frankfurt_Flight_RS.

The number of images is listed as 0. You'll connect the image data to the forward-looking images.

2. In the Project Tree pane, in the Forward_Frankfurt_Flight_RS section, click Add images.
3. In the Select images, folders or list files window, select the jpg folder and click OK.

When the process is complete, Forward_Frankfurt_Flight_RS shows 160 images. Now, you'll add the images to the next capture session.

4. In the Project Tree pane, in the Nadir_Frankfurt_Flight_RS section, click Add images.
5. In the Select images, folders or list files window, select the jpg folder and click OK.

You'll repeat this process for each of the camera capture sessions.

6. In the Project Tree pane, in the Backward_Frankfurt_Flight_RS section, click Add images.
7. In the Select images, folders or list files window, select the jpg folder and click OK.
8. In the Project Tree pane, in the Right_Frankfurt_Flight_RS section, click Add images.
9. In the Select images, folders or list files window, select the jpg folder and click OK.
10. In the Project Tree pane, in the Left_Frankfurt_Flight_RS section, click Add images.
11. In the Select images, folders or list files window, select the jpg folder and click OK.

After the capture sessions have been linked to their images, you can visualize the image footprints.

12. In the Project Tree pane, click Visualization.
13. In the Forward_Frankfurt_Flight_RS section, check Image Footprints.

The image footprints are shown in the globe view.

14. Uncheck Image Footprints.

Define the region of interest and add water bodies

The last two steps before aligning the images are to define the region of interest for the project and to identify where water bodies are located.

1. On the ribbon, on the Home tab, in the Import section, click Geometries and choose Region of Interest.
2. In the Select a region of interest window, in the Computer section, browse to the Frankfurt_City_Collection folder.
3. Double-click the AOI_and_Waterbody.gdb geodatabase to expand it. Click the Frankfurt_AOI feature class and click OK.

The Frankfurt_AOI polygon feature class is added to the globe view. Specifying a region of interest geometry prevents unnecessary data from being processed, minimizing total processing time and storage requirements.

4. On the ribbon, on the Home tab, in the Import section, click Geometries and click Water Body.
5. In the Select a water body geometry window, in AOI_and_Waterbody.gdb, click Frankfurt_waterbody and click OK.

The Frankfurt_waterbody polygon feature class is added to the globe view. Specifying water body geometries flattens and simplifies areas within water bodies. These can be tricky to process and lead to undesirable outputs due to the reflective nature of water.

The capture session has been fully defined. You can now save the project.

6. On the ribbon, click Save Project.
7. In the Save Project As window, browse to a location with plenty of free disk space, type 2023-Frankfurt Reality Studio Tutorial, and click Save.

You have defined the capture sessions, set the coordinate system and camera properties, linked the position and orientation data to the captured images, and saved the project. You are now ready to begin adjusting the images to start creating products from them.

Perform an alignment

The capture session was built from GNSS navigation data recorded during the photo flight. This exterior orientation information is typically not accurate enough to create products such as true orthos or 3D meshes of high geometric quality. To optimize the navigation data, you'll run an alignment.

During alignment, also called aerotriangulation, individual images are connected by determining homologous points (tie points) between overlapping images. With many of these image measurements, the image block can be mathematically adjusted to refine the orientation parameters for each image. Additional accuracy can be obtained by manually measuring ground control points.

Create an alignment

To align the images, you must add an alignment to the project.

1. On the ribbon, on the Home tab, in the Processing section, click New Alignment.
2. In the Alignment pane, for Alignment Name, type Frankfurt_AT.
3. In the Camera Sessions section, check Dataset. This alignment will use all the capture sessions, so they should all be checked.
4. In the Control Points section, click Import Control Points.
5. In the Select input window, browse to the Frankfurt_City_Collection folder and open the GroundControlPoints folder. Select Ground_Control_Points.txt and click OK.
6. In the Control Points Import window, click the Spatial Reference browse button.
7. In the XY Coordinate Systems Available box, type 25832 and press Enter.
8. Click ETRS 1989 UTM Zone 32N.
9. Click the Current Z box. In the Z Coordinate Systems Available box, type 7837 and press Enter.
10. Click DHHN2016 height and click OK.
11. For Choose a delimiter, accept the default delimiter, comma.
12. Click Next.
13. Review the column labels.

The default values are correct.

14. Click Import.

The control points are added to the globe view.

15. In the Alignment pane, in the Control Points section, check Dataset. Expand Dataset to see the new Ground_Control_Points data.

The Standard Deviations section allows you to modify the given accuracy (a priori standard deviations) of the image positions (XYZ position and rotation angles) and of the imported ground control points. For this tutorial, the default values are correct.

The Region of Interest parameter allows you to specify a region to adjust. For this tutorial, you'll perform the alignment on the entire dataset, so there is no need to set a region of interest.

16. Click Create.

Clicking Create adds the Alignment tab to the ribbon. The alignment is ready to run. Running the alignment will start the automatic tie point matching and bundle block adjustment process. This is a computationally intensive process, and the duration of the processing will depend on your computer hardware. On a computer with 128 GB RAM, an AMD Ryzen 24-core CPU @ 3.8 GHz, and an Nvidia GeForce RTX 4090 GPU, the process will take approximately 2 hours.

17. Click Run.

In the Process Manager pane, the Alignment process status appears.

18. Expand the Alignment process to see the steps.

The Process Manager allows you to keep track of the stages of the Alignment process and their status. This might be a good time to take a break or work on something else while the process runs. When the process finishes, you can see it listed in the Process Manager pane. Once the alignment finishes, the QA window appears.
This window shows the key statistics of the bundle block adjustment.

Measure ground control points

You can measure ground control points before or after the initial alignment. Doing it after the alignment has the benefit that the software has already refined the image positions and can provide a better indication of where to measure.

1. In the QA window, on the Overview tab, scroll down and expand Count.

The Image Measurements column for the Ground Control Points row indicates that no image measurements have been done for the ground control points. You will add some now.

2. Optionally, close the QA window.
3. On the ribbon, on the Alignment tab, in the Tools section, click Image Measurements.

The measurement window appears. The left pane shows a globe view of the project area and a Control Points table with the available ground control points.

Note: If the Alignment tab is not visible, in the Project Tree pane, scroll down to the Alignments section and click Frankfurt_AT.

The Image pane shows a set of image measuring tool instructions.

4. Review the information. When finished, close the information window.
5. If the first row in the Control Points table is not highlighted, click the first row.

When you click the first row, the Image List section updates to show all of the images that see the point.

6. Click the first image in the Image List section.

The Image pane shows the first image of that list with a pink circle indicating the potential location of the ground control point. This image is not good for measuring ground control points, because the point is hidden by the tree canopy.

7. Press F to move to the next image.

This image shows the ground control point in the image, surrounded by the Projected point symbol. To measure a selected point, place the pointer on the location of the marked ground control point as seen by the image sensor in the image and click to measure.
The measurement will be added to the Image window as a green cross.

8. Use the scroll wheel on your mouse to zoom closer to the ground control point. Click the center of the point.

The new measured point location is added.

9. Use the scroll wheel on your mouse to zoom out.
10. Under Image List, click the next image.

The next image showing this ground control point opens in the Image window.

Note: You can also press the Spacebar to accept the measurement and move to the next image in the list.

The next ground control point is difficult to distinguish from the pavement.

11. Press F to move to the next image.

The next image has a clear ground control point.

12. In the Image window, click the ground control point.

The measured point is added.

13. Press the Spacebar to accept the measurement and move to the next image in the list.

After you add the second measured point, a new symbol is added to the next image. The red square bracketed point represents a suggested point. A suggested point represents the calculated location of the ground control point based on the previous two measurements.

14. Press the Spacebar to accept the suggested point.
15. Proceed through the remaining images using the following instructions:

* If the ground control point is not visible in the image (for example, if it is hidden by a parked car or tree), press F to skip the image.
* If the suggested point location appears correct, press the Spacebar to confirm it as a valid measurement.
* If the suggested point location does not appear correct, click the location of the ground control point.

The image list can be expanded to show the reprojection error associated with each measured point. The Use column has check marks to indicate the measured ground control points. If you make a mistake, you can uncheck a point and not use it. In the following example image, one of the points has much higher Reprojection Error values than the other ones.
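The Reprojection Error shown per point is the root mean square (RMS) of the pixel residuals across every image in which the point was measured. A minimal sketch with hypothetical residuals shows how a couple of bad measurements inflate it:

```python
import math

def rms_px(residuals):
    """Root mean square of per-image reprojection residuals, in pixels."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical residuals for one ground control point measured in five images:
well_measured = [0.4, 0.6, 0.5, 0.7, 0.5]
with_outliers = [0.5, 0.6, 4.8, 0.7, 5.2]  # two mismeasured images

print(round(rms_px(well_measured), 2))  # 0.55
print(round(rms_px(with_outliers), 2))  # 3.2
```

In a case like the second list, re-measuring or unchecking the two outlier images would bring the point's RMS back in line with the others.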
You can click the row and go back to examine the image and re-collect the measurement, or you can discard this point.

16. Uncheck the Use box for the bad point.

Unchecking the point removes the point and clears the Reprojection Error values. Once all image measurements are collected for a given ground control point, ArcGIS Reality Studio automatically moves forward to the next ground control point in the Control Points list. To measure a different ground control point, click the row header for the point in the Control Points table on the lower left side of the Image Measurements window. Doing so opens a new set of images in the image list and shows the first image on the list in the Image pane.

17. Close the Image Measurements window and begin measuring ground control points. Use the same method to collect measurements for at least five of the other control points covering the area of interest, specified by the Frankfurt_AOI geometry.

Note: Some points, such as point 990007, were not clearly marked on the ground by a point but were collected at a visually distinguishable location, such as a corner of a crosswalk. In the Frankfurt_City_Collection folder, the GroundControlPoints folder contains a set of images showing a green Measured Point marker at the location of the ground control point. If you open the 990007 image file in this folder, you'll see that this ground control point was collected at the corner of a crosswalk. For each ground control point, view the corresponding image in this folder to verify the location before measuring. When conspicuous existing locations are used as ground control points, the surveyor usually notes the location in a set of field notes and takes a picture showing the GPS antenna at that location. The images in this folder simulate that sort of field data.

Note: For best results, measure the location about five times for each sensor view (Left, Right, Forward, Backward, Nadir) for each of the ground control points.
This will ensure that the views are correctly connected. The image list includes a column that identifies the camera for each image. After you have measured a set of images for a ground control point, review the image list for images that have high Reprojection Error values. Uncheck the Use box for these images.

As you work through each of the ground control points in the Control Points table, the Status column will change from an open circle to a half-filled circle for the control points that have been measured. When all the control points have been measured, the Status field will show half-filled circles for each row, and the Reprojection Error statistics for each control point will be visible.

If some control points have higher Reprojection Error statistics than others, you can click the header for the row in the Control Points table, and then, in the Image List, search for and re-measure or remove images with high Reprojection Error values. Once you are satisfied with the control point measurements, you can change the role of a control point to a check point, to use for statistical reporting.

Change a ground control point to a check point

Check points are used for evaluating and reporting on the accuracy of the alignment. Their 3D position and image residuals are estimated using the output image orientation for quality assurance purposes. You'll convert one of the ground control points to a check point.

1. In the Control Points table, click the header for the fourth row (ground control point 990006).

The row for this ground control point is highlighted.

2. In the toolbar at the top of the Control Points table, click Set Role and choose CP.

In the table, the role changes to CP.

Refine the alignment

After adding and measuring control points, or changing other settings of the alignment, you will run the alignment again to refine the positions based on the new information.
This re-runs the bundle block adjustment, but it will be much faster than the initial alignment process.

1. On the ribbon, on the Alignment tab, in the Process section, click Run.

The Process Manager opens and shows the progress of the alignment process. After one or two minutes, the process completes, and the QA tool pane appears. To check the quality of the alignment results, examine the statistics on the QA tool pane. For best results with this data, keep the following in mind:

* The overall Sigma 0 value should be less than 1 px for a well calibrated photogrammetric camera.
* The RMS of the tie point reprojection error is also expected to be less than 1 px.
* The RMS of the horizontal and vertical object residuals for control points should be less than 1.5 GSD (12 cm).

Also check the count data, such as the number of automatic tie points per image and image measurements per tie point, which indicate how well tie points are distributed in the project area and how well adjacent images are connected by a common measurement. You can also review the tie point visualization in the globe view.

Note: These steps are meant to give you basic guidance for analyzing the alignment results. Doing an in-depth analysis of the quality requires knowledge about project requirements and specifications as well as knowledge about the quality of the input data.

2. In the QA pane, click General Information and view the Sigma 0 value.

The value in this example is 0.7559, which is a good value for this dataset.

3. On the right side of the QA pane, scroll down to the Reprojection Errors section and view the Automatic Tie Points Reprojection Errors section.

The RMS value for the tie point reprojection errors in this example is 0.756, which is a good value for this dataset.

4. On the right side of the QA pane, scroll up to the 3D Residuals section. View the Ground Control Points Residuals section.
The RMS value for the Ground Control Points Residuals in this example is 0.151 meters, perhaps a little higher than the desired value of 0.12 meters but acceptable for this exercise.

5. On the left side of the QA pane, scroll down and expand the Count section.

In this example, there are six ground control points with 390 image measurements and one check point with 30 image measurements.

6. Optionally, review the other QA statistics and measurements.
7. On the QA tool, click the Control Points tab.

The Control Points table appears. You can use this table to check the XY and Z residuals for each control point. Unexpectedly large values may indicate points that may have to be re-measured. This might be the case if they were incorrectly measured. Unexpectedly large Delta XYZ values compared to properly measured control points are an indication that this is the problem. You can also review the geography of the actual project data (ground control points, automatic tie points, image positions).

8. On the ribbon, on the Alignment tab, in the Display section, click Automatic Tie Points.

The automatic tie points are drawn in the Display pane.

9. Click the Automatic Tie Points drop-down arrow and choose RMS of Reprojection Errors.

The Display pane updates to show the automatic tie points symbolized by the RMS of the reprojection errors.

10. On the QA tool, click the Automatic Tie Points tab.

The table shows the automatic tie points. You can view and sort the data in this table to identify the automatic tie points with the highest error values.

11. On the QA tool, click the Overview tab. On the right side, scroll to the Reprojection Errors section and view the Automatic Tie Points Reprojection Errors histogram.

The symbology of the histogram matches the symbology of the globe view.

12. On the ribbon, on the Alignment tab, in the Results section, click Report.
13. In the Create Alignment Report window, browse to a location to save the report. For Name, type Frankfurt_AT_report.
‘The PDF is saved on your computer. It is a way to share the QA statistics of the alignment. 15. Close the QA tool and save the project. You've performed an initial alignment, added control points, refined the alignment, and examined the alignment statistics. You also exported a PDF copy of the alignment statistics to document your work, and share with your stakeholders, Next, you'll use the aligned data to create a reconstruction.Perform a reconstruction Now that the alignment process is complete and the results have been examined and determined to be high quality, you are ready to create output products. For this tutorial, you'll create a 3D point cloud and a 3D mesh. Create a reconstruction The first step to generate the products is to create a reconstruction. 1. On the ribbon, click the Home tab. In the Processing section, click New Reconstruction. 2. In the Reconstruction pane, for Reconstruction Name, type Frankfurt RS_3D. This reconstruction session will be used to create two 3D outputs 3. For Capture Scenario, click the drop-down list and choose Aerial Oblique. Choosing a scenario sets some output products and processing settings. The Aerial Oblique setting is useful now because the sample data is a multihead capture session, and all the available imagery will be used to create the output 3D products, The Aerial Nadir setting is more useful when you are creating 2D products. I limits processing to the nadir images. 4.In the Camera Session section, check the Frankfurt_AT alignment session that you created. The alignment is selected. 5. In the Products section, review the output products. The Point Cloud and Mesh products are highlighted. ‘The OSGB and SLPK mesh formats will be exported by default. You can check other formats for the output mesh if you choose. 6.In the Optional section, for Quality, click Ultra. ‘The Ultra setting will run the 3D reconstruction at the native image resolution. 
This will take a longer time to process than the High quality option, but the results will look better. On a computer with 128 GB RAM, an AMD Ryzen 24-core CPU @ 3.8 GHz, and an Nvidia GeForce RTX 4090 GPU, the process will take approximately 8 hours. You can choose the High quality option if you want the output to have slightly reduced detail and lower texture resolution.

7. For Region of Interest, choose Frankfurt_AOI.

Region of Interest allows you to limit processing for your output products to the images relevant to your project.

8. For Water Body Geometries, choose Frankfurt_waterbody.

The Water Body Geometries parameter is used to flatten and simplify areas within water bodies. These can be tricky to process and can lead to undesirable outputs due to the reflective nature of water.

9. For Correction Geometries, accept the default value of None.
10. Click Create.

This finishes the reconstruction setup. The reconstruction is added to the Project Tree pane.

Run the reconstruction

Now that the reconstruction has been set up, the next step is to run it. This will take some time, depending on your computer resources. On a computer with 128 GB RAM, an AMD Ryzen 24-core CPU @ 3.8 GHz, and an Nvidia GeForce RTX 4090 GPU, the process will take approximately 8 hours.

1. On the ribbon, on the Reconstruction tab, in the Processing section, click Run.

The Process Manager pane opens and shows the status of the reconstruction process.

Note: The progress bar for the analysis step will start to run after 10 minutes.

After the analysis step is finished, the globe view will show the processing progress as well. You can observe the individual stereo models being processed in dense matching. Later in the process, you'll see individual tiles of the point cloud and the mesh added to the globe view.

Once the process has finished, the products are added to the Project Tree pane. You can use the Visualization tab to show or hide these products.

2. Wait for the reconstruction process to run.
3.
On the ribbon, on the Reconstruction tab, click Open Results Folder.

This opens the Results folder in Microsoft File Explorer. It contains the 3D point cloud in LAS and I3S (SLPK) format, as well as the 3D mesh in OSGB and I3S (SLPK) format. Use the .slpk files to add the products to ArcGIS Online.

4. If you do not run the process, view the results.

In this tutorial, you have created an ArcGIS Reality Studio project, added a capture session, performed an initial alignment, measured ground control points, and refined the alignment. You evaluated the quality of the alignment and determined that it was acceptable. You used the alignment to create a reconstruction, and you used that reconstruction to create point cloud and 3D mesh outputs. These can be shared to ArcGIS Online or used with local applications on your computer.

You can use a similar process in the reconstruction stage to create 2D products such as true orthophotos and digital surface models. The main difference for creating 2D outputs is that you would use the Aerial Nadir scenario and limit the camera session to nadir camera captures.

You can find more tutorials in the tutorial gallery.

Acknowledgements

* Frankfurt digital aerial imagery: Nadir and oblique images captured and provided by AeroWest GmbH.
* Dark Basemap: Airbus, USGS, NGA, NASA, CGIAR, NLS, OS, NMA, Geodatastyrelsen, GSA, GSI and the GIS User Community, HVBG, Esri, HERE, Garmin, Foursquare, GeoTechnologies, Inc., METI/NASA, USGS.

Send Us Feedback

Please send us your feedback regarding this tutorial. Tell us what you liked as well as what you didn't. If something in the tutorial didn't work, let us know what it was and where in the tutorial you encountered it (the section name and step number). Use this form to send us feedback.

Share and repurpose this tutorial

Sharing and reusing these tutorials are encouraged.
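As an aside for users who prefer scripting over manual uploads: the .slpk outputs mentioned earlier can also be added to ArcGIS Online programmatically with the ArcGIS API for Python. The sketch below is illustrative only; the URL, credentials, file path, and item title are placeholders, and the function is a starting point to adapt rather than a tested workflow.

```python
# Sketch: uploading a Reality Studio .slpk output to ArcGIS Online.
# Assumes the ArcGIS API for Python ("arcgis" package) is installed.
# The path, title, and sign-in details below are placeholders.

SLPK_PATH = r"C:\Reality\Frankfurt\Results\mesh.slpk"  # placeholder path

item_properties = {
    "title": "Frankfurt 3D mesh",
    "type": "Scene Package",  # item type used for .slpk uploads
    "tags": "reality, 3D mesh, Frankfurt",
}

def upload_and_publish(slpk_path, props):
    """Upload an .slpk item and publish it as a hosted scene layer."""
    from arcgis.gis import GIS  # imported lazily; requires a signed-in session
    gis = GIS("https://www.arcgis.com", "username", "password")  # placeholders
    item = gis.content.add(props, data=slpk_path)  # uploads the package
    return item.publish()  # creates the hosted scene layer from the package
```

Calling `upload_and_publish(SLPK_PATH, item_properties)` would upload the package and return the published scene layer item; verify the item type and publish behavior against your organization's ArcGIS Online documentation before relying on this.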
This tutorial is governed by a Creative Commons license (CC BY-SA-NC). See the Terms of Use page for details about adapting this tutorial for your use.
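As a closing numerical aside: the ground control point RMS reported in the alignment QA is the root mean square of the per-point residuals. The sketch below uses hypothetical residual values (illustrative numbers, not taken from the Frankfurt project) to show how the statistic is formed.

```python
import math

# Hypothetical 3D residuals (meters) for six ground control points.
# These values are illustrative only, not from the Frankfurt project.
residuals = [0.12, 0.18, 0.09, 0.16, 0.20, 0.13]

# RMS = square root of the mean of the squared residuals.
rms = math.sqrt(sum(r * r for r in residuals) / len(residuals))

print(f"RMS: {rms:.3f} m")  # prints "RMS: 0.151 m"
print("Meets 0.12 m target" if rms <= 0.12 else "Above 0.12 m target")
```

Because the residuals are squared before averaging, a single large outlier inflates the RMS quickly, which is why the per-point Control Points table in the QA tool is useful for spotting individual points that need re-measurement.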