publications

research papers

  • Art-directable Stroke-based Rendering on Mobile Devices
    2023
    scientific paper — doi / pdf
    Ronja Wagner, Sebastian Schulz, Max Reimann, Amir Semmo, Jürgen Döllner, Matthias Trapp
    abstract
    This paper introduces an art-directable stroke-based rendering technique for transforming photos into painterly renditions on mobile devices. Unlike previous approaches that rely on time-consuming iterative computations and explicit brush-stroke geometry, our method offers an interactive image-based implementation tailored to the capabilities of modern mobile devices. The technique places curved brush strokes in multiple passes, leveraging a texture bombing algorithm. To maintain and highlight essential details for stylization, we incorporate additional information such as image salience, depth, and facial landmarks as parameters. Our technique enables users to control and manipulate the stylization using a wide range of parameters and masks during editing to adjust and refine the stylized image. The result is an interactive painterly stylization tool that supports high-resolution input images, providing users with an immersive and engaging artistic experience on their mobile devices.
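    A rough idea of the multi-pass stroke placement can be sketched in a few lines. The snippet below is a minimal, illustrative take on grid-based texture bombing with coarse-to-fine passes; the circular dabs, jitter scheme, and pass sizes are assumptions for the sketch, not the paper's actual curved-stroke implementation.

      import numpy as np

      def stroke_pass(src, canvas, cell, radius, rng):
          """Stamp one pseudo-randomly placed dab per grid cell (texture bombing)."""
          h, w, _ = src.shape
          yy, xx = np.mgrid[0:h, 0:w]
          for cy in range(0, h, cell):
              for cx in range(0, w, cell):
                  py = min(h - 1, cy + rng.integers(cell))  # jittered centre
                  px = min(w - 1, cx + rng.integers(cell))
                  mask = (yy - py) ** 2 + (xx - px) ** 2 <= radius ** 2
                  canvas[mask] = src[py, px]                # dab takes local colour
          return canvas

      def painterly(src, passes=((32, 20), (16, 10), (8, 5)), seed=7):
          """Coarse-to-fine passes: large strokes first, finer detail on top."""
          rng = np.random.default_rng(seed)
          canvas = np.zeros_like(src)
          for cell, radius in passes:
              canvas = stroke_pass(src, canvas, cell, radius, rng)
          return canvas

    On a GPU, each pass would be a single shader invocation over the image rather than a Python loop, which is what makes an interactive mobile implementation feasible.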
  • Digitalization of bridge inventory via automated analysis of point clouds for generation of BIM models
    2023
    scientific paper — doi / pdf
    Rade Hajdin, Rico Richter, Lazar Rakic, Holger Diederich, Justus Hildebrand, Sebastian Schulz, Jürgen Döllner, Jennifer Bednorz
    abstract
    Building Information Modelling (BIM) is becoming increasingly prevalent in infrastructure asset management, as it facilitates current management practices. This includes the construction of BIM models for roads, rails, bridges, tunnels, etc. Bridges are particularly challenging to digitalize due to their complex geometry. The manual construction of bridge BIM models based on 2D plans is hardly feasible due to the associated workload. Given the recent advancements in the field of 3D surveying and artificial intelligence, new possibilities emerge for an automated generation of as‐is bridge BIM models. This paper presents a novel, modular framework for the automatic processing of point clouds into as‐is BIM models, based on a fusion of artificial intelligence and heuristic algorithms. Representative bridge element datasets were provided to train a neural network. The trained neural network can identify the elements of a bridge, which are further processed using geometric algorithms into surface and solid bridge elements. This result can be additionally enriched with semantic information from existing databases. The final BIM models are exported in the standardized vendor‐free Industry Foundation Classes (IFC) format.
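    The modular structure can be pictured as label → group → fit → export. Below is a heavily simplified, self-contained sketch of that flow; the height-band "classifier" is a stand-in for the trained neural network, and the bounding boxes stand in for the surface/solid reconstruction and IFC export, all of which are assumptions for illustration.

      import numpy as np

      def classify(points):
          """Toy semantic labels from height bands (placeholder for the network)."""
          z = points[:, 2]
          labels = np.full(len(points), "pier", dtype=object)
          labels[z > np.percentile(z, 75)] = "deck"
          labels[z < np.percentile(z, 10)] = "foundation"
          return labels

      def fit_solids(points, labels):
          """Axis-aligned bounding box per semantic class as a crude solid."""
          return {lab: (points[labels == lab].min(axis=0),
                        points[labels == lab].max(axis=0))
                  for lab in np.unique(labels)}

      points = np.random.default_rng(1).uniform(0, 10, (5000, 3))  # fake scan
      for name, (lo, hi) in fit_solids(points, classify(points)).items():
          print(name, "box:", lo.round(1), "->", hi.round(1))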
  • Simulating LIDAR to Create Training Data for Machine Learning on 3D Point Clouds
    2022
    scientific paper — doi / pdf
    Justus Hildebrand, Sebastian Schulz, Rico Richter, Jürgen Döllner
    abstract
    3D point clouds represent an essential category of geodata used in a variety of geoinformation applications. Typically, these applications require additional semantics to operate on subsets of the data like selected objects or surface categories. Machine learning approaches are increasingly used for classification. They operate directly on 3D point clouds and require large amounts of training data. An adequate amount of high-quality training data is often not available or has to be created manually. In this paper, we introduce a system for virtual laser scanning to create 3D point clouds with semantic information by utilizing 3D models. In particular, our system creates 3D point clouds with the same characteristics regarding density, occlusion, and scan pattern as those 3D point clouds captured in the real world. We evaluate our system with different data sets and show the potential to use the data to train neural networks for 3D point cloud classification.
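    The core of such a virtual scanner is ray casting from a scanner origin against 3D models. The sketch below casts a rotating ray fan against analytic stand-in geometry (a ground plane and a sphere instead of full 3D models) and keeps a semantic label per returned point; the scan pattern, scene, and labels are illustrative assumptions.

      import numpy as np

      def ray_sphere(o, d, c, r):
          """Smallest positive ray/sphere hit distance for unit direction d, or inf."""
          oc = o - c
          b = oc @ d
          disc = b * b - (oc @ oc - r * r)
          if disc < 0:
              return np.inf
          t = -b - np.sqrt(disc)
          return t if t > 0 else np.inf

      def simulate_scan(origin, n_azimuth=360, n_elev=32):
          az = np.linspace(0, 2 * np.pi, n_azimuth, endpoint=False)
          el = np.linspace(-0.4, 0.1, n_elev)  # radians, aimed mostly downward
          a, e = np.meshgrid(az, el)
          dirs = np.stack([np.cos(e) * np.cos(a),
                           np.cos(e) * np.sin(a),
                           np.sin(e)], axis=-1).reshape(-1, 3)
          pts, labels = [], []
          for d in dirs:
              t_ground = -origin[2] / d[2] if d[2] < 0 else np.inf  # plane z = 0
              t_sphere = ray_sphere(origin, d, np.array([5.0, 0.0, 1.0]), 1.0)
              t = min(t_ground, t_sphere)  # nearest hit wins
              if np.isfinite(t):
                  pts.append(origin + t * d)
                  labels.append("object" if t == t_sphere else "ground")
          return np.array(pts), np.array(labels)

      cloud, semantics = simulate_scan(np.array([0.0, 0.0, 2.0]))

    Taking the nearest hit per ray reproduces occlusion, one of the real-world scan characteristics the abstract calls out.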
  • Combined visual exploration of 2D ground radar and 3D point cloud data for road environments
    2018
    scientific paper — doi / pdf
    Johannes Wolf, Sören Discher, Leon Masopust, Sebastian Schulz, Rico Richter, Jürgen Döllner
    abstract
    Ground-penetrating 2D radar scans are captured in road environments to examine pavement condition and below-ground variations such as lowerings and developing potholes. 3D point clouds captured above ground provide a precise digital representation of the road’s surface and the surrounding environment. If both data sources are captured for the same area, a combined visualization is a valuable tool for infrastructure maintenance tasks. This paper presents visualization techniques developed for the combined visual exploration of the data captured in road environments. The main challenges are the positioning of the ground radar data within the 3D environment and the reduction of occlusion for individual data sets. By projecting the measured ground radar data onto the precise trajectory of the scan, it can be displayed within the context of the 3D point cloud representation of the road environment. We show that customizable overlay, filtering, and cropping techniques enable insightful data exploration. A 3D renderer combines both data sources. To enable an inspection of areas of interest, ground radar data can be elevated above ground level for better visibility. An interactive lens approach makes it possible to visualize data sources that are currently occluded by others. The visualization techniques prove to be a valuable tool for ground layer anomaly inspection and were evaluated on a real-world data set. The combination of 2D ground radar scans with 3D point cloud data improves data interpretation by giving context information (e.g., about manholes in the street) that can be directly accessed during evaluation.
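    The projection step described above reduces to attaching each radar column to its trajectory point and offsetting each row by its depth. A minimal sketch, assuming a regularly sampled trajectory and a fixed depth step (both assumptions for illustration):

      import numpy as np

      def project_radar(trajectory, radar, depth_step=0.05, lift=0.0):
          """trajectory: (N, 3) scan positions; radar: (M, N) amplitudes.
          Returns (M*N, 4) rows of x, y, z, amplitude for joint rendering
          with the 3D point cloud."""
          m, _ = radar.shape
          depths = np.arange(m) * depth_step                  # metres below surface
          xyz = np.repeat(trajectory[None, :, :], m, axis=0)  # (M, N, 3)
          xyz[..., 2] -= depths[:, None]                      # push rows below ground
          xyz[..., 2] += lift                                 # optional elevation
          return np.concatenate([xyz.reshape(-1, 3), radar.reshape(-1, 1)], axis=1)

      # Usage: a straight 20 m trajectory with a 64-row profile, lifted 2 m.
      traj = np.column_stack([np.linspace(0, 20, 200), np.zeros(200), np.zeros(200)])
      samples = project_radar(traj, np.random.rand(64, 200), lift=2.0)

    The lift parameter mirrors the abstract's option to elevate radar data above ground level for better visibility.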
  • A point-based and image-based multi-pass rendering technique for visualizing massive 3D point clouds in VR environments
    2018
    scientific paper — doi / pdf
    Sören Discher, Leon Masopust, Sebastian Schulz
    abstract
    Real-time rendering for 3D point clouds allows for interactively exploring and inspecting real-world assets, sites, or regions on a broad range of devices but has to cope with their vastly different computing capabilities. Virtual reality (VR) applications rely on high frame rates (i.e., around 90 fps as opposed to 30-60 fps) and show high sensitivity to any kind of visual artifacts, which are typical for 3D point cloud depictions (e.g., holey surfaces or visual clutter due to inappropriate point sizes). We present a novel rendering system that allows for an immersive, nausea-free exploration of arbitrarily large 3D point clouds on state-of-the-art VR devices such as HTC Vive and Oculus Rift. Our approach applies several point-based and image-based rendering techniques that are combined using a multi-pass rendering pipeline. The approach does not require deriving generalized, mesh-based representations in a preprocessing step and preserves the precision and density of the raw 3D point cloud data. The presented techniques have been implemented and evaluated with massive real-world data sets from aerial, mobile, and terrestrial acquisition campaigns containing up to 2.6 billion points to show the practicability and scalability of our approach.
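    The point-based/image-based split can be illustrated on the CPU: a point pass splats points into depth and colour buffers, and an image pass fills the holes where no point landed. The orthographic projection, resolution, and 3x3 neighbourhood fill below are assumptions for the sketch, not the paper's GPU pipeline.

      import numpy as np

      def point_pass(points, colors, res=256):
          """Orthographic splat with a z-buffer (closest point wins)."""
          depth = np.full((res, res), np.inf)
          image = np.zeros((res, res, 3))
          xy = ((points[:, :2] - points[:, :2].min(0)) /
                np.ptp(points[:, :2], axis=0) * (res - 1)).astype(int)
          for (x, y), z, c in zip(xy, points[:, 2], colors):
              if z < depth[y, x]:
                  depth[y, x], image[y, x] = z, c
          return image, depth

      def image_pass(image, depth):
          """Fill pixels no point reached from their filled 3x3 neighbours."""
          out = image.copy()
          for y, x in np.argwhere(np.isinf(depth)):
              ys, xs = slice(max(y - 1, 0), y + 2), slice(max(x - 1, 0), x + 2)
              filled = ~np.isinf(depth[ys, xs])
              if filled.any():
                  out[y, x] = image[ys, xs][filled].mean(axis=0)
          return out

    Doing the hole filling in image space is what avoids a mesh-based preprocessing step: gaps are closed per frame from the rendered buffers instead of from derived geometry.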