Citation link:
https://nbn-resolving.org/urn:nbn:de:hbz:467-12787
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yao, Wei | - |
dc.date.accessioned | 2019-09-02T10:04:49Z | - |
dc.date.available | 2018-02-06T12:12:12Z | - |
dc.date.available | 2019-09-02T10:04:49Z | - |
dc.date.issued | 2017 | - |
dc.description.abstract | With the many high-resolution SAR (synthetic aperture radar) and optical satellites in orbit, the corresponding image archives are constantly growing and being updated, since new high-resolution images are acquired daily. This creates new perspectives and challenges for the automatic interpretation of high-resolution satellite images for detailed semantic annotation and object extraction. In addition, the flourishing field of machine learning has demonstrated the power of computer algorithms, whose "intelligence" has already been widely shown in solving numerous and diverse applications (such as visual object recognition, content-based image retrieval, etc.). However, the proposed and already existing methods can currently process only a limited number of images. This dissertation therefore attempts to extract information from large amounts of satellite imagery. We offer solutions for the semi-automatic interpretation of satellite image content at the patch and pixel level, up to the object level, using high-resolution images from TerraSAR-X and WorldView-2. Here, the analysis potential of unsupervised learning methods is exploited for processing large amounts of data. | de |
dc.description.abstract | With a number of high-resolution Synthetic Aperture Radar (SAR) and optical satellites in orbit, the corresponding image archives are continuously growing and being updated as new high-resolution images are acquired every day. This raises new perspectives and challenges for the automatic interpretation of high-resolution satellite imagery for detailed semantic annotation and object extraction. Moreover, the booming field of machine learning has demonstrated the power of computer algorithms, whose "intelligence" has been shown in numerous and diverse applications such as visual object recognition and content-based image retrieval. However, the proposed and already existing methods are usually able to process only a limited number of images. Hence, this dissertation tries to extract information from large amounts of satellite imagery. We provide solutions for the semi-automatic interpretation of satellite image content from the patch level and pixel level to the object level, using the high-resolution imagery provided by TerraSAR-X and WorldView-2. The mining potential of unsupervised learning methods is utilized for the processing of large amounts of data. With large amounts of data, our solutions first simplify the problem based on a simple assumption: a Gaussian distribution is used to describe the image clusters obtained via a clustering method. Based on these grouped image patch clusters, a semi-supervised cluster-then-classify framework is proposed for the semantic annotation of large datasets. We design a multi-layer scheme that describes image content from three perspectives. The first perspective represents image patches in a hierarchical tree structure: similar patches are grouped together and semantically annotated. The second perspective characterizes the intensity and SAR speckle information to obtain a pixel-level classification into general land cover categories. The third perspective allows an object-level interpretation; here, the location of and similarity between elements are taken into account, and an SVM-based active learning concept is implemented to iteratively update the so-called "non-locality" map, which can be used for object extraction. A further extension of our approach could introduce a hierarchical structure for SAR and optical data in which the patch-level, pixel-level, and object-level image interpretations are connected to each other. Hence, starting from a whole scene, both general and detailed levels of information can be extracted. Such fusion between different levels has achieved promising results towards an automated semantic annotation of large amounts of high-resolution satellite images. This dissertation also demonstrates to which level information can be extracted from each data source. | en |
dc.identifier.uri | https://dspace.ub.uni-siegen.de/handle/ubsi/1278 | - |
dc.identifier.urn | urn:nbn:de:hbz:467-12787 | - |
dc.language.iso | en | en |
dc.rights.uri | https://dspace.ub.uni-siegen.de/static/license.txt | de |
dc.subject.ddc | 620 Engineering sciences and mechanical engineering | de |
dc.subject.other | object-level optical image interpretation | en |
dc.subject.other | Bayesian model | en |
dc.subject.other | active learning | en |
dc.subject.other | High-resolution TerraSAR-X data | en |
dc.subject.other | pixel-level SAR image interpretation | en |
dc.subject.swb | SAR | de |
dc.subject.swb | Remote sensing | de |
dc.subject.swb | Data mining | de |
dc.subject.swb | Speckle | de |
dc.title | Semantic annotation and object extraction for very high resolution satellite images | en |
dc.type | Doctoral Thesis | de |
item.fulltext | With Fulltext | - |
ubsi.date.accepted | 2017-12-15 | - |
ubsi.publication.affiliation | Institut für Kommunikations- und Informationstechnik | de |
ubsi.subject.ghbs | TVV | - |
ubsi.subject.ghbs | XVWD | - |
ubsi.subject.ghbs | YGE | - |
ubsi.type.version | publishedVersion | de |
Appears in Collections: Hochschulschriften
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Dissertation_Wei_Yao.pdf | | 39.78 MB | Adobe PDF | View/Open |
This item is protected by original copyright
Page view(s): 499 (checked on Dec 26, 2024)
Download(s): 132 (checked on Dec 26, 2024)