<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>mapping | Kudan global</title>
	<atom:link href="https://www.kudan.io/blog/tag/mapping/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.kudan.io</link>
	<description>Kudan has been providing proprietary Artificial Perception technologies based on SLAM to enable use cases with significant market potential and impact on our lives such as autonomous driving, robotics, AR/VR and smart cities</description>
	<lastBuildDate>Mon, 05 Feb 2024 06:59:47 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=5.8.13</generator>

<image>
	<url>https://i0.wp.com/www.kudan.io/wp-content/uploads/2020/05/cropped-NoImage.png?fit=32%2C32&#038;ssl=1</url>
	<title>mapping | Kudan global</title>
	<link>https://www.kudan.io</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">179852210</site>	<item>
		<title>Kudan Announces Strategic Partnership with STS Group to Jointly Expand Digital Asset Platform Solution Business towards European Municipality and Utility Infrastructure Sectors</title>
		<link>https://www.kudan.io/blog/strategic-partnership-with-sts-group-to-jointly-expand-digital-asset-platform-solution-business/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=strategic-partnership-with-sts-group-to-jointly-expand-digital-asset-platform-solution-business</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Tue, 14 Nov 2023 06:00:29 +0000</pubDate>
				<category><![CDATA[Press Release]]></category>
		<category><![CDATA[Digital Asset Platform]]></category>
		<category><![CDATA[infrastructure]]></category>
		<category><![CDATA[Kudan]]></category>
		<category><![CDATA[mapping]]></category>
		<category><![CDATA[SLAM]]></category>
		<category><![CDATA[Solution Business]]></category>
		<category><![CDATA[Strategic Partnership]]></category>
		<category><![CDATA[STS group]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=1702</guid>

					<description><![CDATA[<p>Kudan Inc. (headquarters in Shibuya-ku, Tokyo; CEO Daiu Ko, hereafter “Kudan”), a leading provider of real-time simultaneous localization and mapping (SLAM) software, is proud to announce that it has signed a Memorandum of Understanding (MOU) for a strategic partnership with STS Engineering &#38; Construction Kft. (including its group companies, hereafter “STS Group”), a leading renewable energy [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/strategic-partnership-with-sts-group-to-jointly-expand-digital-asset-platform-solution-business/">Kudan Announces Strategic Partnership with STS Group to Jointly Expand Digital Asset Platform Solution Business towards European Municipality and Utility Infrastructure Sectors</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Kudan Inc. (headquarters in Shibuya-ku, Tokyo; CEO Daiu Ko, hereafter “Kudan”), a leading provider of real-time simultaneous localization and mapping (SLAM) software, is proud to announce that it has signed a Memorandum of Understanding (MOU) for a strategic partnership with STS Engineering &amp; Construction Kft. (including its group companies, hereafter “STS Group”), a leading renewable energy engineering, procurement, and construction (EPC) company headquartered in Hungary. This partnership aims to transform asset management for European municipalities and utility customers by combining Kudan&#8217;s SLAM technology and mapping productization package with STS Group&#8217;s 15-plus years of market expertise and experience in energy infrastructure.</p>
<p><img loading="lazy" class="aligncenter wp-image-1703 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2023/11/Kudan-and-STS.png?resize=852%2C193&#038;ssl=1" alt="" width="852" height="193" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2023/11/Kudan-and-STS.png?w=852&amp;ssl=1 852w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2023/11/Kudan-and-STS.png?resize=300%2C68&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2023/11/Kudan-and-STS.png?resize=768%2C174&amp;ssl=1 768w" sizes="(max-width: 852px) 100vw, 852px" data-recalc-dims="1" /></p>
<p>In the dynamic landscape of the public and utility infrastructure sectors, challenges such as aging infrastructure, workforce shortages, escalating regulatory requirements, and costly operational procedures have become increasingly apparent in recent years. Traditional methods that rely on manual inspections, fragmented data management, and limited visibility have created barriers to the optimal functioning of these crucial infrastructures. The strategic alliance between Kudan and STS Group seeks to address these challenges head-on, bringing together Kudan&#8217;s SLAM-based mobile mapping technology and digital asset platform solution with STS Group&#8217;s extensive industry experience in public infrastructure to take asset management in these sectors to the next level.</p>
<p>&#8220;We are excited to partner with STS Group and leverage their experience in the energy industry and in large-scale projects as a main contractor to further enhance and expand the scope of our SLAM technology and turnkey End-to-End Digital Asset Platform Solution,&#8221; said Daiu Ko, CEO of Kudan. “This cooperation will help us deliver our state-of-the-art asset digitization solutions faster and provide them to municipalities, energy companies, utilities, and transmission system operators, enabling them to achieve significant productivity increases, faster design, and highly effective operation and maintenance of their critical assets.&#8221;</p>
<p>&#8220;STS Group has been a leading group of companies in the Hungarian energy market for 20 years, designing and constructing power grids, power plants, and renewable energy plants, and providing operation and maintenance,” said Tamás Gyepes, Founder and Group CEO of STS Group. “Digitalisation in this industry is opening an increasingly important space for design and operation &amp; maintenance. We see a very strong connection between its development and the use of Kudan’s technology and solution, which opens up a new business opportunity in the European electricity and utility market. Digitalization could lead to significant technological advances and efficiency improvements in the future energy sector, and the collaboration between Kudan and STS Group would play a crucial role in contributing to these advancements.&#8221;</p>
<p>The partnership between Kudan and STS Group represents a powerful alliance that promises innovation, efficiency, and excellence in the realm of digital asset management. As both companies embark on this venture together, we are committed to working closely and transforming asset operation and management for the municipality and utility infrastructure sectors in Europe.</p>
<p><strong>About STS Group.</strong><br />
Founded in 2002, STS Group is an internationally recognized Engineering, Procurement, and Construction (EPC) company with over 15 years of experience. Specializing in High Voltage infrastructure projects, Renewable Energy initiatives, and Energy Asset Operation and Maintenance, STS Group is committed to delivering high-quality solutions across the energy sector. Learn more at <a href="https://stsgroup.hu/" target="_blank" rel="noopener">https://stsgroup.hu/</a>.</p>
<p><strong>About Kudan Inc.</strong><br />
Kudan is a deep tech research and development company specializing in algorithms for artificial perception (AP). As a complement to artificial intelligence (AI), AP functions allow machines to develop autonomy. Currently, Kudan is using its high-level technical innovation to explore business areas based on its own milestone models established for deep tech which provide wide-ranging impact on several major industrial fields.<br />
For more information, please refer to Kudan’s website at <a href="https://www.kudan.io/" target="_blank" rel="noopener">https://www.kudan.io/</a>.</p>
<p>■Company Details<br />
Name: Kudan Inc.<br />
Securities Code: 4425 (TSE Growth)<br />
Representative: CEO Daiu Ko</p>
<p>■For more details, please contact us from <a href="https://www.kudan.io/contact" target="_blank" rel="noopener">here</a>.</p><p>The post <a href="https://www.kudan.io/blog/strategic-partnership-with-sts-group-to-jointly-expand-digital-asset-platform-solution-business/">Kudan Announces Strategic Partnership with STS Group to Jointly Expand Digital Asset Platform Solution Business towards European Municipality and Utility Infrastructure Sectors</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1702</post-id>	</item>
		<item>
		<title>Kudan launched its affordable mobile mapping dev kit for vehicle and handheld</title>
		<link>https://www.kudan.io/blog/kudan-launched-its-affordable-mobile-mapping-dev-kit-for-vehicle-and-handheld/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=kudan-launched-its-affordable-mobile-mapping-dev-kit-for-vehicle-and-handheld</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Thu, 24 Nov 2022 06:01:06 +0000</pubDate>
				<category><![CDATA[Press Release]]></category>
		<category><![CDATA[3D-Lidar SLAM]]></category>
		<category><![CDATA[KdLidar]]></category>
		<category><![CDATA[Kudan]]></category>
		<category><![CDATA[mapping]]></category>
		<category><![CDATA[Mobile Mapping Dev Kit]]></category>
		<category><![CDATA[Simultaneous Localization and Mapping]]></category>
		<category><![CDATA[SLAM]]></category>
		<category><![CDATA[Solution]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=1563</guid>

					<description><![CDATA[<p>Kudan Inc. (headquarters in Shibuya-ku, Tokyo; CEO Daiu Ko, hereafter “Kudan”) is pleased to announce that Kudan launched its own development kit for vehicle mount and handheld mobile mapping applications using Kudan 3D-Lidar SLAM (KdLidar). There has been considerable demand for a packaged solution combining hardware and our 3D-Lidar SLAM software so that customers can [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/kudan-launched-its-affordable-mobile-mapping-dev-kit-for-vehicle-and-handheld/">Kudan launched its affordable mobile mapping dev kit for vehicle and handheld</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Kudan Inc. (headquarters in Shibuya-ku, Tokyo; CEO Daiu Ko, hereafter “Kudan”) is pleased to announce that Kudan launched its own development kit for vehicle mount and handheld mobile mapping applications using Kudan 3D-Lidar SLAM (KdLidar).</p>
<p><img loading="lazy" class="aligncenter wp-image-1564 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/11/pic1_1124.png?resize=939%2C439&#038;ssl=1" alt="" width="939" height="439" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/11/pic1_1124.png?w=939&amp;ssl=1 939w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/11/pic1_1124.png?resize=300%2C140&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/11/pic1_1124.png?resize=768%2C359&amp;ssl=1 768w" sizes="(max-width: 939px) 100vw, 939px" data-recalc-dims="1" /></p>
<p>There has been considerable demand for a packaged solution combining hardware and our 3D-Lidar SLAM software so that customers can easily generate point clouds with our SLAM software. Even though there are existing mobile mapping solutions on the market, some segments of the geospatial industry are looking for a dev kit type of solution with better price-to-performance, more flexible parameter configuration, and easier operation of essential functions. We observed that this requirement is prominent in the academic and research sector, and in order to respond to this market demand, we launched our own mobile mapping dev kit.</p>
<p><span style="text-decoration: underline;">The most noticeable advantages of our dev kit are the following.</span></p>
<ol>
<li><strong>Excellent return on investment/price for performance</strong>: Targets half the price of alternative solutions without compromising the quality of point clouds.</li>
<li><strong>Flexibility</strong>: Supports different parameter settings for data collection and point cloud generation to meet the needs of various application use cases.</li>
<li><strong>Simplicity</strong>: Less than an hour from unboxing to scanning for the handheld version, with all essential functionalities supported.</li>
</ol>
<p>You can find more information about the dev kit on our website, so please visit and send us an inquiry for more details.</p>
<ul>
<li>Handheld: <a href="https://www.kudan.io/mapping_dev_kit/handheld_version/" target="_blank" rel="noopener">https://www.kudan.io/mapping_dev_kit/handheld_version/</a></li>
<li>Vehicle mount: <a href="https://www.kudan.io/mapping_dev_kit/vehicle_mount_version/" target="_blank" rel="noopener">https://www.kudan.io/mapping_dev_kit/vehicle_mount_version/</a></li>
</ul>
<p><img loading="lazy" class="aligncenter wp-image-1565 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/11/pic2_1124.png?resize=939%2C529&#038;ssl=1" alt="" width="939" height="529" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/11/pic2_1124.png?w=939&amp;ssl=1 939w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/11/pic2_1124.png?resize=300%2C169&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/11/pic2_1124.png?resize=768%2C433&amp;ssl=1 768w" sizes="(max-width: 939px) 100vw, 939px" data-recalc-dims="1" /></p>
<p>We also continue to offer our software to enhance the performance of existing mobile mapping solutions that rely on INS or other SLAM systems.</p>
<p><span style="text-decoration: underline;"><strong>About Kudan Inc.</strong></span><br />
Kudan is a deep tech research and development company specializing in algorithms for artificial perception (AP). As a complement to artificial intelligence (AI), AP functions allow machines to develop autonomy. Currently, Kudan is using its high-level technical innovation to explore business areas based on its own milestone models established for deep tech which provide wide-ranging impact on several major industrial fields.<br />
For more information, please refer to Kudan’s website at <a href="https://www.kudan.io/" target="_blank" rel="noopener noreferrer">https://www.kudan.io/</a>.</p>
<p>■Company Details<br />
Name: Kudan Inc.<br />
Securities Code: 4425 (TSE Growth)<br />
Representative: CEO Daiu Ko</p>
<p>■For more details, please contact us from <a href="https://www.kudan.io/contact" target="_blank" rel="noopener noreferrer">here</a>.</p><p>The post <a href="https://www.kudan.io/blog/kudan-launched-its-affordable-mobile-mapping-dev-kit-for-vehicle-and-handheld/">Kudan launched its affordable mobile mapping dev kit for vehicle and handheld</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1563</post-id>	</item>
		<item>
		<title>Kudan 3D-Lidar SLAM (KdLidar) in Action: Vehicle-Based Mapping in an Urban area</title>
		<link>https://www.kudan.io/blog/kdlidar-in-action-vehicle-based-mapping-in-an-urban-area/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=kdlidar-in-action-vehicle-based-mapping-in-an-urban-area</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Fri, 28 Oct 2022 09:00:58 +0000</pubDate>
				<category><![CDATA[Tech Blog]]></category>
		<category><![CDATA[KdLidar]]></category>
		<category><![CDATA[Kudan]]></category>
		<category><![CDATA[Kudan 3D-Lidar SLAM]]></category>
		<category><![CDATA[Lidar]]></category>
		<category><![CDATA[mapping]]></category>
		<category><![CDATA[Simultaneous Localization and Mapping]]></category>
		<category><![CDATA[SLAM]]></category>
		<category><![CDATA[tech blog]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=1514</guid>

					<description><![CDATA[<p>It’s been a while since our last “Kudan 3D-Lidar SLAM (KdLidar) in action” series. Of course, we have been working on fascinating projects and powerful features (and just came back from InterGeo in Germany!) This time, our demo showcases the result in one of the most typical environments and setups. Recorded in an urban area [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/kdlidar-in-action-vehicle-based-mapping-in-an-urban-area/">Kudan 3D-Lidar SLAM (KdLidar) in Action: Vehicle-Based Mapping in an Urban area</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>It’s been a while since our last “Kudan 3D-Lidar SLAM (KdLidar) in action” series. Of course, we have been working on fascinating projects and powerful features (and just came back from InterGeo in Germany!)</p>
<p><img loading="lazy" class="aligncenter wp-image-1515 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Blog-pic.png?resize=939%2C325&#038;ssl=1" alt="" width="939" height="325" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Blog-pic.png?w=939&amp;ssl=1 939w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Blog-pic.png?resize=300%2C104&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Blog-pic.png?resize=768%2C266&amp;ssl=1 768w" sizes="(max-width: 939px) 100vw, 939px" data-recalc-dims="1" /></p>
<p>This time, our demo showcases the result in one of the most typical environments and setups.</p>
<ul>
<li>Recorded in an urban area (the station square of a large train station in Japan) with surrounding buildings and roofed areas, which prevent good GNSS reception</li>
<li>Recorded with a vehicle-based sensor setup: a tilted 3D-Lidar with INS</li>
</ul>
<p>Here is the demo video &#8211; <strong>Kudan 3D-Lidar SLAM in Action: Vehicle Mobile Mapping in an Urban Area</strong></p>
<p><iframe loading="lazy" title="Kudan 3D-Lidar SLAM in Action: Vehicle Mobile Mapping in an Urban Area" width="500" height="281" src="https://www.youtube.com/embed/nuBxLaVuaKA?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></p>
<p>The video is quite straightforward and largely explains itself. The main benefits you get from KdLidar are:</p>
<ol>
<li><strong>Up to &lt;1cm accuracy in various environments</strong>: Tested and proven with multiple geospatial partners to perform with up to &lt;1cm accuracy both indoors and outdoors</li>
<li><strong>Flexible sensor choices</strong>: Various sensors (3D Lidar, IMU, and INS) tested and supported</li>
<li><strong>Faster time-to-market</strong>: Much faster time-to-market than internal development or other SLAM software (We have a team of 30 SLAM engineers)</li>
</ol>
<p>Please stay tuned for further updates!</p>
<p>■For more details, please contact us from <a href="https://www.kudan.io/contact" target="_blank" rel="noopener noreferrer">here</a>.</p><p>The post <a href="https://www.kudan.io/blog/kdlidar-in-action-vehicle-based-mapping-in-an-urban-area/">Kudan 3D-Lidar SLAM (KdLidar) in Action: Vehicle-Based Mapping in an Urban area</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1514</post-id>	</item>
		<item>
		<title>Endless Possibilities with SLAM and 5G/Cloud Technology Together (Part 2)</title>
		<link>https://www.kudan.io/blog/endless-possibilities-with-slam-and-5g-cloud-technology-together-part-2/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=endless-possibilities-with-slam-and-5g-cloud-technology-together-part-2</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Tue, 04 Oct 2022 09:00:36 +0000</pubDate>
				<category><![CDATA[Press Release]]></category>
		<category><![CDATA[3D-SLAM]]></category>
		<category><![CDATA[5G]]></category>
		<category><![CDATA[AR/VR]]></category>
		<category><![CDATA[autonomous vehicles]]></category>
		<category><![CDATA[cloud technology]]></category>
		<category><![CDATA[drones]]></category>
		<category><![CDATA[mapping]]></category>
		<category><![CDATA[possibility]]></category>
		<category><![CDATA[robotics]]></category>
		<category><![CDATA[Simultaneous Localization and Mapping]]></category>
		<category><![CDATA[SLAM]]></category>
		<category><![CDATA[tech blog]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=1468</guid>

					<description><![CDATA[<p>Endless Possibilities with SLAM and 5G/Cloud Technology Together (Part 2) We shared the basic concept of SLAM with cloud/5G networks and an example in autonomous mobile robot applications in our previous article. In this second part, we are going to share examples of visual positioning, AR cloud, and autonomous driving applications. We believe this [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/endless-possibilities-with-slam-and-5g-cloud-technology-together-part-2/">Endless Possibilities with SLAM and 5G/Cloud Technology Together (Part 2)</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<h1><strong>Endless Possibilities with SLAM and 5G/Cloud Technology Together (Part 2)</strong></h1>
<p><img loading="lazy" class="aligncenter wp-image-1451 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/09/Pic1.png?resize=974%2C650&#038;ssl=1" alt="" width="974" height="650" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/09/Pic1.png?w=974&amp;ssl=1 974w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/09/Pic1.png?resize=300%2C200&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/09/Pic1.png?resize=768%2C513&amp;ssl=1 768w" sizes="(max-width: 974px) 100vw, 974px" data-recalc-dims="1" /></p>
<p>We shared the basic concept of SLAM with cloud/5G networks and an example in autonomous mobile robot applications in <a href="https://docs.google.com/document/d/1ka0S3rrOwOAF8sL5zMMKzDMrnR72du4Yu6bDRWQteBs/edit?usp=sharing" target="_blank" rel="noopener">our previous article</a>. In this second part, we are going to share examples of visual positioning, AR cloud, and autonomous driving applications.</p>
<p>We believe this will help you understand various use cases where SLAM and 5G/Cloud can play a role together.</p>
<h2><strong>Use-case examples in visual positioning</strong></h2>
<p>We have seen increasing demand for visual positioning of people or machines in indoor settings, alongside AMR use cases.</p>
<p>One example is <strong>understanding the positions of operators in industrial facilities such as a warehouse or a power plant</strong>, so that the system can warn operators when they get close to a hazardous area. The user can also use this information to improve operator productivity.</p>
<p>In these applications, the wearable hardware needs to be very affordable, as the solution requires as many units as there are operators. A warehouse generally requires more than 100 units.</p>
<p>In such a scenario, offloading SLAM processing onto the cloud is the key to minimizing hardware costs. Figure 1 illustrates a suggested architecture to achieve this.</p>
<div id="attachment_1469" style="width: 984px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1469" loading="lazy" class="wp-image-1469 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Pic1.png?resize=974%2C533&#038;ssl=1" alt="" width="974" height="533" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Pic1.png?w=974&amp;ssl=1 974w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Pic1.png?resize=300%2C164&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Pic1.png?resize=768%2C420&amp;ssl=1 768w" sizes="(max-width: 974px) 100vw, 974px" data-recalc-dims="1" /><p id="caption-attachment-1469" class="wp-caption-text"><em>Figure 1: High-level architecture of SLAM on cloud example of operator positioning in a warehouse.</em></p></div>
<p>A stereo camera can be used instead of a mono camera to create the map, resulting in more stable performance. Each positioning/tracking device, however, can be a mono camera mounted on the helmet to minimize hardware cost.</p>
<p>This application doesn’t necessarily need 5G, but 5G ensures mission-critical connectivity and low latency for these operators.</p>
<p>Another noteworthy example of visual positioning is <strong>forklift position tracking</strong>.</p>
<p>Many companies deploy a fleet of manual forklifts and want to understand how efficiently they operate and improve overall productivity. The key to achieving this is recognizing the position of the forklifts.</p>
<p>Since multiple forklifts would be deployed, the hardware costs need to be a bare minimum, and 2D or 3D lidars aren’t an option. We can create a map by scanning the area with one forklift, and then use this map to obtain the positions of all forklifts in the area from their stereo cameras, or even mono cameras, depending on the required accuracy. This architecture is illustrated in Figure 2.</p>
<div id="attachment_1470" style="width: 984px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1470" loading="lazy" class="wp-image-1470 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Pic2.png?resize=974%2C533&#038;ssl=1" alt="" width="974" height="533" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Pic2.png?w=974&amp;ssl=1 974w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Pic2.png?resize=300%2C164&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Pic2.png?resize=768%2C420&amp;ssl=1 768w" sizes="(max-width: 974px) 100vw, 974px" data-recalc-dims="1" /><p id="caption-attachment-1470" class="wp-caption-text"><em>Figure 2: High-level architecture of SLAM on cloud example of forklift position tracking</em></p></div>
<p>This scenario doesn’t demand a forklift position for every camera frame (such as 30 FPS); something around 1 FPS is enough. So the system could send only one pair of images (from the left and right lenses) every second to understand the overview of forklift operations, as sketched below.</p>
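<p>As a rough illustration, a minimal sketch of such a throttled upload loop might look like the following. The endpoint URL, the camera helper, and the response format are hypothetical placeholders for illustration only, not part of any particular product:</p>
<pre><code>import time
import requests  # any HTTP client works; requests is used here for brevity

CLOUD_SLAM_URL = "https://example.com/slam/locate"  # hypothetical endpoint
SEND_INTERVAL_S = 1.0  # ~1 FPS is enough for forklift position tracking

def capture_stereo_pair():
    """Stub for the camera driver; replace with real stereo capture."""
    return b"left-jpeg-bytes", b"right-jpeg-bytes"

while True:
    left, right = capture_stereo_pair()
    # Send one stereo pair per second; SLAM runs on the cloud and returns the pose.
    response = requests.post(CLOUD_SLAM_URL,
                             files={"left": left, "right": right},
                             timeout=5)
    pose = response.json()  # e.g. {"x": ..., "y": ..., "z": ..., "yaw": ...}
    print("forklift pose:", pose)
    time.sleep(SEND_INTERVAL_S)
</code></pre>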
<p>As you may have already understood, this application doesn’t need 5G, but 5G would make communication more reliable.</p>
<h2><strong>AR Cloud use case example of SLAM and 5G/Cloud</strong></h2>
<p>Augmented Reality (AR) Cloud refers to an AR application that uses a map stored on the cloud together with the position of the device on that map. This is another common usage of SLAM.</p>
<p>However, it is hard to meet the computation and memory requirements of such applications on the device alone [1].</p>
<p>For example, a person holds their smartphone and looks around with it. The smartphone understands where it is and which direction the person is facing.</p>
<p>So an AR cloud app can show the direction to a specific location based on this or overlay ads on the actual scenery on the screen. Figure 3 shows the simple architecture of this application.</p>
<div id="attachment_1471" style="width: 984px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1471" loading="lazy" class="wp-image-1471 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Pic3.png?resize=974%2C533&#038;ssl=1" alt="" width="974" height="533" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Pic3.png?w=974&amp;ssl=1 974w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Pic3.png?resize=300%2C164&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Pic3.png?resize=768%2C420&amp;ssl=1 768w" sizes="(max-width: 974px) 100vw, 974px" data-recalc-dims="1" /><p id="caption-attachment-1471" class="wp-caption-text"><em>Figure 3: High-level architecture of SLAM on cloud example of AR cloud.</em></p></div>
<h2><strong>Autonomous driving using SLAM and 5G/Cloud</strong></h2>
<p>SLAM on the cloud provides another level of scalability and flexibility to autonomous driving applications.</p>
<p>One of the main challenges of autonomous driving at scale is maintaining up-to-date maps on each of the vehicles. A straightforward solution is to keep a master map on the cloud and let each vehicle consume the necessary part of this map as they drive.</p>
<p>Now the problem gets simpler, as the task is to maintain that single map. So how can we keep this single map up-to-date?</p>
<div id="attachment_1472" style="width: 984px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1472" loading="lazy" class="wp-image-1472 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Pic4.png?resize=974%2C527&#038;ssl=1" alt="" width="974" height="527" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Pic4.png?w=974&amp;ssl=1 974w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Pic4.png?resize=300%2C162&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/10/Pic4.png?resize=768%2C416&amp;ssl=1 768w" sizes="(max-width: 974px) 100vw, 974px" data-recalc-dims="1" /><p id="caption-attachment-1472" class="wp-caption-text"><em>Figure 4: High-level architecture of SLAM on cloud example in autonomous driving.</em></p></div>
<p>One approach is using sensor data from each vehicle and updating the single map whenever needed using this data. Thus each vehicle not only consumes the map data but also acts as a mapping agent, as seen in figure 4.</p>
<p>The system could adjust what portion of the map each vehicle downloads based on the vehicle speed and the network speed. As you might have guessed, 5G helps deliver these partial maps to each vehicle with low latency and stable connectivity.</p>
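<p>To make this concrete, here is a minimal sketch of one way the prefetch radius could be chosen. The tile layout, look-ahead time, and bandwidth threshold are illustrative assumptions, not a description of any production system:</p>
<pre><code>def tiles_to_download(x, y, speed_mps, bandwidth_mbps, tile_size_m=100.0):
    """Return grid coordinates of map tiles to prefetch around (x, y).

    Faster vehicles need to look further ahead; a weak network caps the fetch.
    """
    lookahead_s = 10.0  # plan roughly ten seconds of driving ahead
    radius_m = max(tile_size_m, speed_mps * lookahead_s)
    if bandwidth_mbps &lt; 10:  # on a weak link, fetch a smaller neighborhood
        radius_m = min(radius_m, 2 * tile_size_m)
    r = int(radius_m // tile_size_m)
    cx, cy = int(x // tile_size_m), int(y // tile_size_m)
    return [(cx + dx, cy + dy)
            for dx in range(-r, r + 1)
            for dy in range(-r, r + 1)]

# A vehicle at (1250 m, 430 m) doing 15 m/s on a healthy link:
print(tiles_to_download(1250.0, 430.0, speed_mps=15.0, bandwidth_mbps=50))
</code></pre>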
<h2><strong>Final words</strong></h2>
<p>This article showed how cloud and 5G communication technology could help SLAM be widely adopted across multiple use cases.</p>
<p>The examples we showed demonstrate that this crossover of technologies adds flexibility to how SLAM can be used. Of course, the use cases listed here are by no means exhaustive, and you may have specific requirements for the business problems at hand.</p>
<p><a href="https://www.kudan.io/contact/" target="_blank" rel="noopener">Say hi</a>, and we’d be happy to help you transform your business through our SLAM solutions!</p>
<h2><strong>References</strong></h2>
<p>[1] Jiao, J., Yun, P. and Liu, M. (2017). A Cloud-Based Visual SLAM Framework for Low-Cost Agents. 471–484. [<a href="https://www.researchgate.net/profile/Jianhao-Jiao/publication/320301941_A_Cloud-Based_Visual_SLAM_Framework_for_Low-Cost_Agents/links/5a37a8b0a6fdccdd41fc98ee/A-Cloud-Based-Visual-SLAM-Framework-for-Low-Cost-Agents.pdf" target="_blank" rel="noopener">PDF</a>]</p><p>The post <a href="https://www.kudan.io/blog/endless-possibilities-with-slam-and-5g-cloud-technology-together-part-2/">Endless Possibilities with SLAM and 5G/Cloud Technology Together (Part 2)</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1468</post-id>	</item>
		<item>
		<title>Endless Possibilities with SLAM and 5G/Cloud Technology Together (Part 1)</title>
		<link>https://www.kudan.io/blog/endless-possibilities-with-slam-and-5g-cloud-technology-together-part-1/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=endless-possibilities-with-slam-and-5g-cloud-technology-together-part-1</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Tue, 27 Sep 2022 09:00:24 +0000</pubDate>
				<category><![CDATA[Press Release]]></category>
		<category><![CDATA[3D-SLAM]]></category>
		<category><![CDATA[5G]]></category>
		<category><![CDATA[AR/VR]]></category>
		<category><![CDATA[autonomous vehicles]]></category>
		<category><![CDATA[cloud technology]]></category>
		<category><![CDATA[drones]]></category>
		<category><![CDATA[mapping]]></category>
		<category><![CDATA[possibility]]></category>
		<category><![CDATA[robotics]]></category>
		<category><![CDATA[Simultaneous Localization and Mapping]]></category>
		<category><![CDATA[SLAM]]></category>
		<category><![CDATA[tech blog]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=1450</guid>

					<description><![CDATA[<p>Endless Possibilities with SLAM and 5G/Cloud Technology Together (Part 1) 3D-SLAM is one of the instrumental technologies across several use cases involving robotics, mapping, drones, autonomous vehicles, and AR/VR. Not second to any, cloud and 5G communications are disrupting the technology industry with multiple use cases. What possibilities can be unlocked if we can combine these [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/endless-possibilities-with-slam-and-5g-cloud-technology-together-part-1/">Endless Possibilities with SLAM and 5G/Cloud Technology Together (Part 1)</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<h1><strong>Endless Possibilities with SLAM and 5G/Cloud Technology Together (Part 1)</strong></h1>
<p><img loading="lazy" class="aligncenter wp-image-1451 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/09/Pic1.png?resize=974%2C650&#038;ssl=1" alt="" width="974" height="650" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/09/Pic1.png?w=974&amp;ssl=1 974w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/09/Pic1.png?resize=300%2C200&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/09/Pic1.png?resize=768%2C513&amp;ssl=1 768w" sizes="(max-width: 974px) 100vw, 974px" data-recalc-dims="1" /></p>
<p>3D-SLAM is one of the instrumental technologies across several use cases involving robotics, mapping, drones, autonomous vehicles, and AR/VR. Not second to any, cloud and 5G communications are disrupting the technology industry with multiple use cases.</p>
<p>What possibilities can be unlocked if we can combine these two disruptive technologies in use cases?</p>
<p>At Kudan, we have already delivered multiple projects in this crossover: SLAM on the cloud and SLAM using 5G communications. This article explores and paints a clearer picture of some of the problems that can be solved by combining 3D-SLAM with 5G/cloud technologies.</p>
<p>As a first step, let’s understand how these technologies can be combined in the use cases.</p>
<hr />
<h2><strong>Understanding the crossover of SLAM and Cloud technology</strong></h2>
<p>Kudan Visual SLAM can run 5–10 times faster than other 3D SLAM algorithms (e.g., ORB-SLAM2 on some Arm-based processors). Still, because it requires iterative optimization over a large number of 3D points, 3D-SLAM can be heavy and compute-intensive for applications with tight processing budgets.</p>
<p>As a result, not all applications can afford the processing hardware that is suitable for 3D SLAM [1]. Often this acts as a blocker for adopting 3D SLAM for use cases. The ability to offload the SLAM process elsewhere can help adopt the technology into many more use cases.</p>
<p><strong>This is where the cloud comes into play.</strong> We can offload the SLAM process to the cloud. Let us explain through an example: in robotic applications, the robots can send the images from their sensors to the cloud. The SLAM process then runs on the cloud and sends only the pose information back to the robot, which can use it for its own control and motion.</p>
<p>This is truly disruptive, and SLAM on the cloud already has significant potential to expand the usage of SLAM to many more use cases. Figure 1 below explains the architecture of using the cloud alongside SLAM, and the small code sketch after it illustrates the idea.</p>
<div id="attachment_1452" style="width: 984px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1452" loading="lazy" class="wp-image-1452 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/09/Pic2.png?resize=974%2C477&#038;ssl=1" alt="" width="974" height="477" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/09/Pic2.png?w=974&amp;ssl=1 974w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/09/Pic2.png?resize=300%2C147&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/09/Pic2.png?resize=768%2C376&amp;ssl=1 768w" sizes="(max-width: 974px) 100vw, 974px" data-recalc-dims="1" /><p id="caption-attachment-1452" class="wp-caption-text"><em>Figure 1: High-level architecture of Visual SLAM on the cloud.</em></p></div>
<h2><strong>Deteriorating accuracy and role of 5G communication technology</strong></h2>
<p>The problem we faced in some SLAM-on-cloud projects was that latency and connection robustness weren’t up to the mark when sending the image stream to the cloud and receiving the required information back.</p>
<p>For use cases requiring continuous pose information at 10Hz or more, this is crucial. As a result, we experienced deteriorated accuracy and an increased risk of robot collisions. We needed a way for the robots and the cloud to communicate better.</p>
<p><strong>That’s when 5G technology becomes relevant to SLAM.</strong> It provides higher speeds, superior reliability, and negligible latency. In our context, image sequences from multiple robots can be sent over 5G without significant delay, enabling applications to use low-cost, low-power edge hardware while still benefiting from 3D SLAM features.</p>
<p>You now probably have a good understanding of the architecture and the usage of both cloud and 5G technology for SLAM. Let’s now visit a use-case example that leverages these technologies.</p>
<hr />
<h2><strong>Use-case example: Autonomous Mobile Robots (AMRs)</strong></h2>
<p>Visual SLAM functionality can be added to an existing autonomous mobile robot (AMR) to make its localization performance more robust and stable.</p>
<p>Many AMRs use 2D-Lidar SLAM for localization. Though it shows acceptable performance in SLAM-friendly environments, it struggles where the scenery is constantly changing, or during occasional outdoor operation such as moving between factories.</p>
<p>Visual SLAM has a clear advantage when fused with its 2D counterpart in these scenarios. Figure 2 illustrates how Visual SLAM can be used with a cloud service.</p>
<div id="attachment_1453" style="width: 984px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1453" loading="lazy" class="wp-image-1453 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/09/Pic3.png?resize=974%2C449&#038;ssl=1" alt="" width="974" height="449" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/09/Pic3.png?w=974&amp;ssl=1 974w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/09/Pic3.png?resize=300%2C138&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/09/Pic3.png?resize=768%2C354&amp;ssl=1 768w" sizes="(max-width: 974px) 100vw, 974px" data-recalc-dims="1" /><p id="caption-attachment-1453" class="wp-caption-text"><em>Figure 2: High-level architecture of Visual SLAM on cloud example in robotics applications.</em></p></div>
<p>As shown in the architecture above, you can obtain the pose from Visual SLAM through the cloud and fuse it with the 2D-Lidar SLAM pose for added redundancy.</p>
<p>The frequency of sending images can be flexible depending on the purpose of Visual SLAM and the availability of 5G.</p>
<p>In instances where there is a good 5G network available, you may send an image stream at 30 frames per second (FPS). When not available, you may send an image every second (1 FPS) and use it as an aiding approach on top of 2D-Lidar SLAM.</p>
<p>Another option is to dynamically switch between Visual SLAM and 2D-Lidar SLAM; for instance, when the robot is outdoors, it sends images more frequently so that it can rely solely on Visual SLAM, and when indoors, it sends at a lower frequency to limit network usage.</p>
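<p>A minimal sketch of such a switching policy is below; the thresholds and frame rates are illustrative assumptions only, not recommended settings:</p>
<pre><code>def choose_upload_fps(has_5g: bool, outdoors: bool) -&gt; float:
    """Pick how many camera frames per second to send to the cloud."""
    if has_5g and outdoors:
        return 30.0  # rely primarily on Visual SLAM
    if has_5g:
        return 5.0   # indoors: 2D-Lidar SLAM leads, Visual SLAM assists
    return 1.0       # weak link: Visual SLAM merely aids 2D-Lidar SLAM

assert choose_upload_fps(has_5g=True, outdoors=True) == 30.0
assert choose_upload_fps(has_5g=False, outdoors=False) == 1.0
</code></pre>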
<p>As you can see, you suddenly have many options to enhance the system&#8217;s overall performance.</p>
<h2><strong>Final words</strong></h2>
<p>Did you really think those were all the use case examples of SLAM with 5G/Cloud? There are plenty more. But let’s stop here for now, as we have already introduced many new ideas around 3D SLAM.</p>
<p>Many more practical examples, including visual positioning and autonomous driving, will be discussed in more detail in part 2 of this article, giving you an even more detailed understanding of the applications of SLAM. Stay tuned!</p>
<p>Meanwhile, feel free to <a href="https://www.kudan.io/contact/" target="_blank" rel="noopener">say hi</a>, and we’d be happy to help you transform your business through our SLAM solutions!</p>
<hr />
<h2><strong>References</strong></h2>
<p>[1] Kamburugamuve, S., He, H. &amp; Fox, G. and Crandall, D. (2016). Cloud-based Parallel Implementation of SLAM for Mobile Robots. [<a href="https://www.researchgate.net/profile/Supun-Kamburugamuve/publication/296692114_Cloud-based_Parallel_Implementation_of_SLAM_for_Mobile_Robots/links/5a22d0164585155dd41c89d3/Cloud-based-Parallel-Implementation-of-SLAM-for-Mobile-Robots.pdf" target="_blank" rel="noopener">PDF</a>]</p><p>The post <a href="https://www.kudan.io/blog/endless-possibilities-with-slam-and-5g-cloud-technology-together-part-1/">Endless Possibilities with SLAM and 5G/Cloud Technology Together (Part 1)</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1450</post-id>	</item>
		<item>
		<title>How to Tune 3D-Lidar SLAM Parameters</title>
		<link>https://www.kudan.io/blog/how-to-tune-3d-lidar-slam-parameters/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=how-to-tune-3d-lidar-slam-parameters</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Wed, 13 Jul 2022 07:00:16 +0000</pubDate>
				<category><![CDATA[Tech Blog]]></category>
		<category><![CDATA[3D-Lidar SLAM]]></category>
		<category><![CDATA[Kudan]]></category>
		<category><![CDATA[Kudan 3D-Lidar SLAM]]></category>
		<category><![CDATA[KudanSLAM]]></category>
		<category><![CDATA[mapping]]></category>
		<category><![CDATA[robotics]]></category>
		<category><![CDATA[Simultaneous Localization and Mapping]]></category>
		<category><![CDATA[voxel]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=1330</guid>

					<description><![CDATA[<p>Do you want to implement 3D-Lidar SLAM for your use case successfully? Have you heard of the 3D-Lidar SLAM but are unsure how to maximize its performance? Do you find tweaking the 3D-Lidar SLAM parameters complex and confusing? If you answered yes to any of these questions, you would find a lot of value in [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/how-to-tune-3d-lidar-slam-parameters/">How to Tune 3D-Lidar SLAM Parameters</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" class="aligncenter wp-image-1334 size-large" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Picture1-1024x473.png?resize=1024%2C473&#038;ssl=1" alt="" width="1024" height="473" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Picture1.png?resize=1024%2C473&amp;ssl=1 1024w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Picture1.png?resize=300%2C139&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Picture1.png?resize=768%2C355&amp;ssl=1 768w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Picture1.png?resize=1536%2C710&amp;ssl=1 1536w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Picture1.png?w=1595&amp;ssl=1 1595w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /></p>
<p>Do you want to implement 3D-Lidar SLAM for your use case successfully?</p>
<p>Have you heard of the 3D-Lidar SLAM but are unsure how to maximize its performance?</p>
<p>Do you find tweaking the 3D-Lidar SLAM parameters complex and confusing?</p>
<p>If you answered yes to any of these questions, you will find a lot of value in this article. Once you’ve decided to use 3D-Lidar SLAM for your business use case, you will need to tick off these three checkboxes:</p>
<ol>
<li>Pick an appropriate 3D-Lidar sensor unit</li>
<li>Decide on a proper 3D-Lidar SLAM approach</li>
<li>Choose the relevant parameters for the SLAM system</li>
</ol>
<p>If you are unsure of the first two checkboxes, we’ve written about the 3D-Lidar SLAM approach and how you can select the appropriate sensor for your use case. We’ll leave the links at the end of this article for you to delve deeper if you haven’t already.</p>
<p>In this article, we’ll share descriptions, typical values, and high-level guidelines for the four main parameters commonly found in 3D-Lidar SLAM approaches. We’ll also talk about how these parameters may impact the overall performance of the system.</p>
<hr />
<h2>Voxel sizes and why they matter</h2>
<p><strong>Smaller voxel size improves robustness and accuracy but has a price to pay</strong></p>
<p>For a SLAM system, we treat the 3D space as a group of small 3D spaces known as <a href="https://en.wikipedia.org/wiki/Voxel" target="_blank" rel="noopener">voxels</a>. <strong>Voxel size</strong> indicates how granular or coarse you want to dissect the 3D space. In other words, what dimension should a voxel be?</p>
<p>In theory, a smaller voxel size means improved robustness and accuracy, even though the gains saturate at some point. This is because when the voxel size is small, the SLAM system dissects the 3D space into many smaller 3D spaces and can thus extract more points for tracking and mapping. Having more points to track against when moving results in greater robustness and accuracy.</p>
<p>However, when the system has more points to process for tracking and mapping, the system is inevitably slower, introducing the processing time vs. accuracy trade-off.</p>
<p>So when should you lower the voxel size? We recommend a smaller voxel size in small indoor spaces, because the points concentrate in a small 3D space and a large voxel size would not yield a meaningful number of points. We also recommend doing so for mapping applications where real-time processing isn’t required.</p>
<p>Here’s what typical voxel sizes in meters may look like:</p>
<div id="attachment_1335" style="width: 1034px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1335" loading="lazy" class="wp-image-1335 size-large" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Voxel-size-1024x492.png?resize=1024%2C492&#038;ssl=1" alt="" width="1024" height="492" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Voxel-size.png?resize=1024%2C492&amp;ssl=1 1024w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Voxel-size.png?resize=300%2C144&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Voxel-size.png?resize=768%2C369&amp;ssl=1 768w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Voxel-size.png?w=1206&amp;ssl=1 1206w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /><p id="caption-attachment-1335" class="wp-caption-text"><em>Figure 2: Typical values for voxel size (in meter)</em></p></div>
<p>As a rule of thumb, we would start from 1.0m for outdoor robotics, 0.5m for large indoor robotics, and 0.3m for small indoor (e.g., office space) robotics. As you may have noticed, the values are relatively low for mapping, to gain more accuracy at the expense of processing speed.</p>
<p>However, if your SLAM system doesn’t track well with the suggested voxel size, try a smaller one. Conversely, if your SLAM system cannot keep up with a real-time feed, try a larger voxel size to reduce the number of points used.</p>
<p>The key is understanding the impact of the voxel size parameter to inform the decision on changing the value as required, which you do now.</p>
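<p>To build an intuition for what a given voxel size does to your data, you can downsample a recorded point cloud offline. Here is a minimal sketch using the open-source Open3D library; the file name is a placeholder, and this is independent of any particular SLAM product:</p>
<pre><code>import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.pcd")  # placeholder: any recorded scan
print("original points:", len(pcd.points))

for voxel_size in (0.3, 0.5, 1.0):  # small indoor / large indoor / outdoor
    down = pcd.voxel_down_sample(voxel_size=voxel_size)
    # Smaller voxels keep more points: more to track against, but slower.
    print(voxel_size, "m:", len(down.points), "points")
</code></pre>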
<hr />
<h2><strong>Understand the maximum distance for matching the current frame to the map</strong></h2>
<p>This parameter indicates how far you want to use an existing keyframe in the Iterative Closest Point [1] process to determine the current pose instead of creating a new one.</p>
<p>In other words, this parameter determines how frequently you want to create a new keyframe. The longer the distance you set, the less frequently the system creates a new keyframe.</p>
<p>Some typical values (in meters) are given in the image below:</p>
<div id="attachment_1331" style="width: 1034px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1331" loading="lazy" class="wp-image-1331 size-large" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Maximum-distance-to-match-1024x427.png?resize=1024%2C427&#038;ssl=1" alt="" width="1024" height="427" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Maximum-distance-to-match.png?resize=1024%2C427&amp;ssl=1 1024w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Maximum-distance-to-match.png?resize=300%2C125&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Maximum-distance-to-match.png?resize=768%2C320&amp;ssl=1 768w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Maximum-distance-to-match.png?w=1389&amp;ssl=1 1389w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /><p id="caption-attachment-1331" class="wp-caption-text"><em>Figure 3: Typical values for maximum distance for matching frames to map (in meters)</em></p></div>
<p>For indoor applications, we set smaller values since we’d like more frequent keyframes than in our outdoor use cases, to detect movements more precisely.</p>
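<p>The rule this parameter drives can be pictured with a small sketch (illustrative only; real systems typically also consider rotation and match quality before spawning a keyframe, and the distance value here is a hypothetical indoor setting):</p>
<pre><code>import math

MAX_MATCH_DISTANCE_M = 2.0  # hypothetical value for an indoor setting

def needs_new_keyframe(pose, last_keyframe_pose):
    """Spawn a new keyframe once we have moved too far to match the old one."""
    dx, dy, dz = (p - q for p, q in zip(pose, last_keyframe_pose))
    return math.sqrt(dx * dx + dy * dy + dz * dz) &gt; MAX_MATCH_DISTANCE_M

print(needs_new_keyframe((2.5, 0.0, 0.0), (0.0, 0.0, 0.0)))  # True
</code></pre>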
<hr />
<h2><strong>Pay attention to the voxel size when deciding on the minimum matched points to track</strong></h2>
<p>The parameter “minimum number of matched points to track” sets the threshold to decide whether or not the tracking has failed.</p>
<p>When the number of matched points between the keyframes and the current frame is below this threshold value, the SLAM system is said to be ‘lost’.</p>
<p>Intuitively, the smaller the number of matched points required to track, the easier it is for the SLAM system to continue tracking without getting lost. However, setting too small a number introduces false tracking and accumulates larger drift.</p>
<p>It’s essential to consider the voxel size you have already set in order to select the appropriate value for this parameter. If you’ve set a larger voxel size, you’ll have fewer points available for matching, so the minimum matched points threshold needs to be set lower.</p>
<p>The numbers below are examples from our SLAM system, which uses absolute values; other systems may use relative numbers or percentages instead.</p>
<div id="attachment_1333" style="width: 1034px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1333" loading="lazy" class="wp-image-1333 size-large" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Minimum-to-track-1024x375.png?resize=1024%2C375&#038;ssl=1" alt="" width="1024" height="375" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Minimum-to-track.png?resize=1024%2C375&amp;ssl=1 1024w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Minimum-to-track.png?resize=300%2C110&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Minimum-to-track.png?resize=768%2C281&amp;ssl=1 768w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Minimum-to-track.png?w=1187&amp;ssl=1 1187w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /><p id="caption-attachment-1333" class="wp-caption-text"><em>Figure 4: Typical values for the minimum number of points to track</em></p></div>
<p>If the environment has rich structure, set a higher value; reduce it when there are fewer objects around the lidar (e.g., a fairly open field with several trees). Indoor environments thus typically use higher values than outdoor environments.</p>
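<p>In code, the check behind this threshold amounts to a simple comparison; the value below is a hypothetical indoor setting, since the appropriate number depends on your voxel size and environment:</p>
<pre><code>MIN_MATCHED_POINTS_TO_TRACK = 800  # hypothetical; tune with your voxel size

def tracking_state(matched_points: int) -&gt; str:
    """The system is 'lost' when too few map points match the current frame."""
    if matched_points &lt; MIN_MATCHED_POINTS_TO_TRACK:
        return "lost"
    return "tracking"

print(tracking_state(1200))  # tracking
print(tracking_state(300))   # lost
</code></pre>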
<hr />
<h2><strong>Setting the minimum number of match points to relocalize</strong></h2>
<p>The parameter “minimum number of points to relocalize” sets a threshold to decide if the initial position on the map is identified.</p>
<p>When the number of matched points between the keyframes and the current frame is above the given threshold value, the SLAM system considers it as being “relocalized.” Here’s an example from our SLAM system. Again we’re using absolute values here, but it could be relative numbers or percentages as well.</p>
<div id="attachment_1332" style="width: 1034px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1332" loading="lazy" class="wp-image-1332 size-large" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Minimum-to-relocalize-1024x358.png?resize=1024%2C358&#038;ssl=1" alt="" width="1024" height="358" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Minimum-to-relocalize.png?resize=1024%2C358&amp;ssl=1 1024w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Minimum-to-relocalize.png?resize=300%2C105&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Minimum-to-relocalize.png?resize=768%2C269&amp;ssl=1 768w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/Minimum-to-relocalize.png?w=1243&amp;ssl=1 1243w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /><p id="caption-attachment-1332" class="wp-caption-text"><em>Figure 5: Values for “minimum number of points to relocalize”</em></p></div>
<p>When you set this parameter to a smaller value, the SLAM finds it easier to relocalize. However, selecting a too-small value introduces false relocalization [2].</p>
<p>A rule of thumb is to start at 1500 and, depending on how the SLAM behaves (e.g., too few loop closures performed, or frequent false relocalizations), reduce or increase it accordingly.</p>
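<p>Putting the four parameters of this article side by side helps keep their interactions in view. The dictionary below is a sketch of such a configuration; apart from the 1500 starting point mentioned above, the values are illustrative assumptions rather than recommended defaults:</p>
<pre><code># Illustrative outdoor-robotics configuration gathering the four parameters.
slam_params = {
    "voxel_size_m": 1.0,                       # rule-of-thumb outdoor start
    "max_match_distance_m": 5.0,               # hypothetical; sets keyframe spacing
    "min_matched_points_to_track": 500,        # hypothetical; lower for large voxels
    "min_matched_points_to_relocalize": 1500,  # rule-of-thumb starting value
}

for name, value in slam_params.items():
    print(f"{name}: {value}")
</code></pre>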
<hr />
<h2><strong>Final words</strong></h2>
<p>At Kudan, through our blog, we’ve aimed to distill the information and present it as simply as possible. Here are some of our previous articles on 3D Lidar SLAM:</p>
<ul>
<li><a href="https://www.kudan.io/blog/3d-lidar-slam-the-basics/" target="_blank" rel="noopener">3D lidar SLAM: The Basics</a></li>
<li><a href="https://www.kudan.io/blog/how-to-select-the-best-3d-lidar-for-slam/" target="_blank" rel="noopener">How to Select the Best 3D Lidar for SLAM</a></li>
</ul>
<p>In this article, we’ve presented the goal of tuning 3D-Lidar SLAM parameters, general guidelines, typical values, and their impact on the system&#8217;s performance. The list of parameters to tweak, and additional tricks to improve SLAM accuracy and robustness, extends beyond what we’ve been able to cover here. If you’re interested in learning more, or if you have a question in mind, we invite you to <a href="https://www.kudan.io/contact/" target="_blank" rel="noopener">say hi to us</a> for your specific needs around the technology.</p>
<hr />
<h2><strong>References</strong></h2>
<p>[1] Chen, Yang &amp; Medioni, Gerard. (1992) “Object modeling by registration of multiple range images,” Image and Vision Computing, vol. 10, no. 3, pp. 145–155.[<a href="https://www.sciencedirect.com/science/article/abs/pii/026288569290066C" target="_blank" rel="noopener">PDF</a>].<br />
[2] Vysotska, Olga &amp; Stachniss, Cyrill. (2017). Relocalization under Substantial Appearance Changes using Hashing [<a href="https://www.researchgate.net/profile/Cyrill_Stachniss/publication/344351938_Relocalization_under_Substantial_Appearance_Changes_using_Hashing/links/5f6b4252458515b7cf4747c2/Relocalization-under-Substantial-Appearance-Changes-using-Hashing.pdf" target="_blank" rel="noopener">PDF</a>]</p>
<hr />
<p>■For more details, please contact us from <a href="https://www.kudan.io/contact" target="_blank" rel="noopener noreferrer">here</a>.</p><p>The post <a href="https://www.kudan.io/blog/how-to-tune-3d-lidar-slam-parameters/">How to Tune 3D-Lidar SLAM Parameters</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1330</post-id>	</item>
		<item>
		<title>Kudan 3D-Lidar SLAM (KdLidar) in action：Long narrow corridors</title>
		<link>https://www.kudan.io/blog/kudan-3d-lidar-slam-kdlidar-in-action-long-narrow-corridors/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=kudan-3d-lidar-slam-kdlidar-in-action-long-narrow-corridors</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Mon, 02 May 2022 07:30:16 +0000</pubDate>
				<category><![CDATA[Press Release]]></category>
		<category><![CDATA[3D-Lidar]]></category>
		<category><![CDATA[AMR]]></category>
		<category><![CDATA[Autonomous Mobile Robots]]></category>
		<category><![CDATA[KdLidar]]></category>
		<category><![CDATA[Kudan]]></category>
		<category><![CDATA[Kudan Lidar SLAM]]></category>
		<category><![CDATA[localization]]></category>
		<category><![CDATA[Long narrow corridors]]></category>
		<category><![CDATA[mapping]]></category>
		<category><![CDATA[robotics]]></category>
		<category><![CDATA[Simultaneous Localization and Mapping]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=1134</guid>

					<description><![CDATA[<p>Lidar SLAM in long narrow corridors &#8211; deceivingly challenging Long narrow corridors in office buildings and industrial facilities are quite common environments for robotics and mapping applications. However, they are one of the more challenging environments for Lidar SLAM due to repetitive structural appearances and limited GNSS signals (You can learn how 3D Lidar SLAM [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/kudan-3d-lidar-slam-kdlidar-in-action-long-narrow-corridors/">Kudan 3D-Lidar SLAM (KdLidar) in action：Long narrow corridors</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<h2><strong>Lidar SLAM in long narrow corridors &#8211; deceivingly challenging</strong></h2>
<p>Long narrow corridors in office buildings and industrial facilities are quite common environments for robotics and mapping applications. However, they are among the more challenging environments for Lidar SLAM due to repetitive structural appearances and limited GNSS signals (you can learn how 3D Lidar SLAM works in <a href="http://www.kudan.io/archives/1070" target="_blank" rel="noopener">this blog post</a>). This time, we are showcasing Kudan Lidar SLAM (KdLidar) taking on this challenging environment with ease, using a very simple setup.</p>
<p><img loading="lazy" class="aligncenter wp-image-1135 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/05/USC-office-1.jpg?resize=1920%2C1080" alt="" width="1920" height="1080" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/05/USC-office-1.jpg?w=1920&amp;ssl=1 1920w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/05/USC-office-1.jpg?resize=300%2C169&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/05/USC-office-1.jpg?resize=1024%2C576&amp;ssl=1 1024w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/05/USC-office-1.jpg?resize=768%2C432&amp;ssl=1 768w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/05/USC-office-1.jpg?resize=1536%2C864&amp;ssl=1 1536w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /></p>
<h2><strong>Kudan Lidar SLAM works robustly in these challenging but common environments without any external sensors</strong></h2>
<p>This scan was done using only an Ouster lidar (OS1-32) with its built-in IMU, operated as a handheld scanner, as shown in the picture below.</p>
<p><span style="text-decoration: underline;"><strong>Kudan Lidar SLAM in action: In long narrow corridors in an office building</strong></span></p>
<p><iframe loading="lazy" title="Kudan Lidar SLAM: In Long Narrow Corridors" width="500" height="281" src="https://www.youtube.com/embed/JVXS6q2KoGE?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></p>
<p>(Data credit: UCS in Korea)</p>
<p>As you may know, Kudan Lidar SLAM detects and performs loop closures to optimize the point cloud map. It did indeed detect some loops during this scan, but their effects are barely noticeable because very little drift was generated in the first place. This is a good indicator that Kudan Lidar SLAM is managing this challenging environment with ease.</p>
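<p>For a rough intuition of why small drift means barely visible loop-closure corrections, here is a deliberately simplified, one-dimensional Python sketch; real SLAM optimizes a full 6-DoF pose graph, so treat this as an illustration only.</p>
<pre><code>import numpy as np

# Toy example: the sensor walks out and back to its start, so the final
# pose should equal the initial pose (0). Noisy odometry leaves a small
# residual drift, which the loop-closure constraint spreads over the
# whole trajectory.
steps = np.array([1.00, 1.01, 0.99, -0.98, -1.03, -0.97])  # out and back
poses = np.concatenate([[0.0], np.cumsum(steps)])

drift = poses[-1]                        # should be 0 at loop closure
optimized = poses - np.linspace(0.0, drift, len(poses))
print(round(drift, 3), optimized[-1])    # small drift, small correction
</code></pre>
<p>With only 0.02 units of accumulated drift in this toy run, the corrected trajectory is nearly indistinguishable from the raw one &#8211; which is exactly what you see in the video above.</p>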
<p>This type of environment is challenging yet quite common across various applications.</p>
<ul>
<li>Mapping and tracking for autonomous service robots operating within offices and commercial buildings</li>
<li>Mapping of office buildings for inspection, maintenance, and facilities management</li>
<li>Progress monitoring and documentation of construction worksites</li>
</ul>
<p>Here are some of the details of the environment, and demo parameters.</p>
<ul>
<li>Size of the area: Each corridor runs 50&#8211;60 m in length</li>
<li>Sensor: Ouster OS1-32 3D lidar: Only lidar, without further sensor fusion (however, we can utilize other sensors if needed)</li>
<li>The SLAM video is generated in real time for progress monitoring; the high-density point cloud map is generated in post-processing</li>
</ul>
<p>Please feel free to reach out to us if you operate in these types of environments, or have other challenging environments you need a solution for, and see the difference commercial-grade SLAM can make as part of your solution. We are happy to solve these problems together.</p>
<p><strong>About Kudan Inc.</strong><br />
Kudan (Tokyo Stock Exchange securities code: 4425) is a deep tech research and development company specializing in algorithms for artificial perception (AP). As a complement to artificial intelligence (AI), AP functions allow machines to develop autonomy. Currently, Kudan is using its high-level technical innovation to explore business areas based on its own milestone models established for deep tech which provide wide-ranging impact on several major industrial fields.<br />
For more information, please refer to Kudan’s website at <a href="https://www.kudan.io/" target="_blank" rel="noopener noreferrer">https://www.kudan.io/</a>.</p>
<p>■Company Details<br />
Name: Kudan Inc.<br />
Securities Code: 4425<br />
Representative: CEO Daiu Ko</p>
<p>■For more details, please contact us from <a href="https://www.kudan.io/contact" target="_blank" rel="noopener noreferrer">here</a>.</p><p>The post <a href="https://www.kudan.io/blog/kudan-3d-lidar-slam-kdlidar-in-action-long-narrow-corridors/">Kudan 3D-Lidar SLAM (KdLidar) in action：Long narrow corridors</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1134</post-id>	</item>
		<item>
		<title>Kudan 3D-Lidar SLAM (KdLidar) in action: In a subterranean cave for geospatial applications</title>
		<link>https://www.kudan.io/blog/kudan-3d-lidar-slam-kdlidar-in-action-in-a-subterranean-cave-for-geospatial-applications/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=kudan-3d-lidar-slam-kdlidar-in-action-in-a-subterranean-cave-for-geospatial-applications</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Wed, 30 Mar 2022 06:00:50 +0000</pubDate>
				<category><![CDATA[Tech Blog]]></category>
		<category><![CDATA[3D-Lidar]]></category>
		<category><![CDATA[KdLidar]]></category>
		<category><![CDATA[Kudan]]></category>
		<category><![CDATA[Kudan Lidar SLAM]]></category>
		<category><![CDATA[Lidar]]></category>
		<category><![CDATA[Lidar SLAM]]></category>
		<category><![CDATA[localization]]></category>
		<category><![CDATA[mapping]]></category>
		<category><![CDATA[Simultaneous Localization and Mapping]]></category>
		<category><![CDATA[SLAM]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=1064</guid>

					<description><![CDATA[<p>We shared our SLAM in action in typical environments for robotics in the past. We decided to shift gears this time, and share an example of SLAM in a setting that we feel is well suited for lidar and lidar SLAM. It’s a subterranean cave &#8211; limited light, a lot of variable natural features, and [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/kudan-3d-lidar-slam-kdlidar-in-action-in-a-subterranean-cave-for-geospatial-applications/">Kudan 3D-Lidar SLAM (KdLidar) in action: In a subterranean cave for geospatial applications</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>We shared our SLAM in action in typical environments for robotics in the past. We decided to shift gears this time, and share an example of SLAM in a setting that we feel is well suited for lidar and lidar SLAM. It’s a subterranean cave &#8211; limited light, a lot of variable natural features, and no GNSS signals!</p>
<p>This data was collected with a handheld device carrying a 3D-lidar. As some of you already know, underground environments are among the most challenging for conventional geospatial equipment, due to the unavailability of GNSS and their complex structure.</p>
<p>The video shows how KdLidar works in this environment to collect data, and create a beautiful point cloud.</p>
<p>Try to guess what sensor we used for this work while you are watching this video!</p>
<p>Here is the video:<br />
<strong>Kudan Lidar SLAM: In an underground cave</strong></p>
<p><iframe loading="lazy" title="Kudan Lidar SLAM: In an underground cave" width="500" height="281" src="https://www.youtube.com/embed/YTpXNtD90RE?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></p>
<p>Many existing solutions only let you see the output after you store the data and process it offline; you cannot see how the scanning is going during the capture or immediately afterwards, which can lead to a significant loss of productivity. As you can see, the user has a good understanding of how the scanning is going in real time (as the point cloud is being generated), and can then create a denser, crisper point cloud in post-processing.</p>
<p>One thing to note is that this data was collected using only an Ouster OS0-32, without its internal (or an external) IMU, and obviously without GNSS. Lidar SLAM without an IMU in a handheld configuration is another major challenge, because continuous natural human motion can cause ghosting, blurred points, and trajectory drift.</p>
<p>Our partner, who collected this data, was quite satisfied with the final result for their mapping purposes. However, if you want consistently good-looking results with higher accuracy, we can boost performance with further tuning and IMU sensor fusion.</p>
<p>Here are some of the details of the environment, and demo parameters.</p>
<ul>
<li><strong>Size of the area</strong>: 100m x 70m (or 330 ft x 230 ft)</li>
<li><strong>Sensor</strong>: Ouster OS0-32 3D lidar: Only lidar, without further sensor fusion (however, we can utilize other sensors if needed)</li>
<li>The map was generated after successful loop closure</li>
</ul>
<p>We hope you are enjoying this “Kudan SLAM in action” series as you get to see Kudan SLAM’s robustness, accuracy, reliability and flexibility in real environments.</p>
<p><strong>About Kudan Inc.</strong><br />
Kudan (Tokyo Stock Exchange securities code: 4425) is a deep tech research and development company specializing in algorithms for artificial perception (AP). As a complement to artificial intelligence (AI), AP functions allow machines to develop autonomy. Currently, Kudan is using its high-level technical innovation to explore business areas based on its own milestone models established for deep tech which provide wide-ranging impact on several major industrial fields.<br />
For more information, please refer to Kudan’s website at <a href="https://www.kudan.io/" target="_blank" rel="noopener noreferrer">https://www.kudan.io/</a>.</p>
<p>■Company Details<br />
Name: Kudan Inc.<br />
Securities Code: 4425<br />
Representative: CEO Daiu Ko</p>
<p>■For more details, please contact us from <a href="https://www.kudan.io/contact" target="_blank" rel="noopener noreferrer">here</a>.</p><p>The post <a href="https://www.kudan.io/blog/kudan-3d-lidar-slam-kdlidar-in-action-in-a-subterranean-cave-for-geospatial-applications/">Kudan 3D-Lidar SLAM (KdLidar) in action: In a subterranean cave for geospatial applications</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1064</post-id>	</item>
		<item>
		<title>Direct Visual SLAM</title>
		<link>https://www.kudan.io/blog/direct-visual-slam/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=direct-visual-slam</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Wed, 16 Sep 2020 14:34:09 +0000</pubDate>
				<category><![CDATA[Tech Blog]]></category>
		<category><![CDATA[Kudan]]></category>
		<category><![CDATA[KudanSLAM]]></category>
		<category><![CDATA[mapping]]></category>
		<category><![CDATA[Visual Direct SLAM]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=508</guid>

					<description><![CDATA[<p>In my last article, we looked at feature-based visual SLAM (or indirect visual SLAM), which utilizes a set of keyframes and feature points to construct the world around the sensor(s). This approach initially enabled visual SLAM to run in real-time on consumer-grade computers and mobile devices, but with increasing CPU processing and camera performance with [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/direct-visual-slam/">Direct Visual SLAM</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" class="size-full wp-image-496 aligncenter" src="https://i0.wp.com/www.kudan.io/jp/wp-content/uploads/sites/3/2020/11/03-Artisense.gif?resize=640%2C297&#038;ssl=1" alt="" width="640" height="297" data-recalc-dims="1" /></p>
<p>In my <a href="https://www.kudan.io/archives/433" target="_blank" rel="noopener noreferrer">last article</a>, we looked at feature-based visual SLAM (or indirect visual SLAM), which utilizes a set of keyframes and feature points to construct the world around the sensor(s). This approach initially enabled visual SLAM to run in real-time on consumer-grade computers and mobile devices, but as CPUs grew faster and cameras improved with lower noise, a denser point cloud representation of the world became tangible through Direct Photogrammetric SLAM (or Direct SLAM). A denser point cloud enables a higher-accuracy 3D reconstruction of the world and more robust tracking, especially in featureless environments and under changing scenery (from weather and lighting). In this article, we will take a look at the evolution of direct SLAM methods over the last decade, and some interesting trends that have come out of it.</p>
<p>Direct SLAM started with the idea of using all the pixels from camera frame to camera frame to resolve the world around the sensor(s), relying on principles from <a href="https://en.wikipedia.org/wiki/Photogrammetry" target="_blank" rel="noopener noreferrer">photogrammetry</a>. Instead of extracting feature points from the image and keeping track of those feature points in 3D space, direct methods look at some constrained aspects of a pixel (color, brightness, intensity gradient), and track the movement of those pixels from frame to frame. This approach changes the problem being solved from one of minimizing geometric reprojection errors, as in the case of indirect SLAM, to minimizing photometric errors.</p>
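<p>For intuition, here is a minimal Python sketch of a photometric error term of the kind direct methods minimize. The <code>warp</code> function is a hypothetical stand-in for projecting reference pixels into the current frame under a candidate pose; real systems add robust norms, image pyramids, and analytic Jacobians with respect to the 6-DoF pose.</p>
<pre><code>import numpy as np

def photometric_error(ref_img, cur_img, warp):
    """Sum of squared intensity differences between a reference frame and
    the current frame, after mapping each reference pixel through `warp`."""
    h, w = ref_img.shape
    err = 0.0
    for v in range(h):
        for u in range(w):
            u2, v2 = warp(u, v)
            if w > u2 >= 0 and h > v2 >= 0:      # skip out-of-frame pixels
                diff = float(ref_img[v, u]) - float(cur_img[v2, u2])
                err += diff * diff
    return err

# A one-pixel horizontal shift as a trivially simple motion hypothesis:
ref = np.random.rand(8, 8)
cur = np.roll(ref, 1, axis=1)
print(photometric_error(ref, cur, lambda u, v: (u + 1, v)))  # 0.0
</code></pre>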
<p>The direct visual SLAM solutions we will review are from a monocular (single camera) perspective. Having a stereo camera system will simplify some of the calculations needed to derive depth while providing an accurate scale to the map without extensive calibration.<br />
In the interest of brevity, I’ve linked to some explanations of fundamental concepts that come into play for visual SLAM:</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Structure_from_Motion" target="_blank" rel="noopener noreferrer">Structure from motion (SfM)</a></li>
<li><a href="https://en.wikipedia.org/wiki/Motion_estimation" target="_blank" rel="noopener noreferrer">Motion estimation</a></li>
<li><a href="https://en.wikipedia.org/wiki/Optical_flow" target="_blank" rel="noopener noreferrer">Optical flow</a></li>
</ul>
<p>While these ideas help in the deeper understanding of some of the mechanics, we’ll save them for another day.</p>
<h3><b>Dense tracking and mapping (DTAM, 2011)</b></h3>
<p>The DTAM approach was one of the first real-time direct visual SLAM implementations, but it relied heavily on the GPU to make this happen. Grossly simplified, DTAM starts by taking multiple stereo baselines for every pixel until the first keyframe is acquired and an initial depth map with stereo measurements is created. Using this initial map, the camera motion between frames is tracked by comparing the image against the model view generated from the map. With each successive image frame, depth information is estimated for each pixel and optimized by minimizing the total depth energy.<br />
The result is a model with depth information for every pixel, as well as an estimate of camera pose. Since it is tracking every pixel, DTAM produces a much denser depth map, appears to be much more robust in featureless environments, and is better suited for dealing with varying focus and motion blur.<br />
The following clips compare DTAM against Parallel Tracking and Mapping (PTAM), a classic feature-based visual SLAM method.</p>
<p><img loading="lazy" class="size-full wp-image-501 aligncenter" src="https://i0.wp.com/www.kudan.io/jp/wp-content/uploads/sites/3/2020/11/03-DTM-Translation.gif?resize=480%2C272&#038;ssl=1" alt="" width="480" height="272" data-recalc-dims="1" /></p>
<p>With rapid motion, you can see tracking deteriorate in the right pane: the virtual object placed in the scene jumps around as the tracked feature points try to keep up with the shifting scene. DTAM, on the other hand, is fairly stable throughout the sequence, since it tracks the entire scene and not just detected feature points.</p>
<p><img loading="lazy" class="size-full wp-image-500 aligncenter" src="https://i0.wp.com/www.kudan.io/jp/wp-content/uploads/sites/3/2020/11/03-DTM-Defocus.gif?resize=480%2C269&#038;ssl=1" alt="" width="480" height="269" data-recalc-dims="1" /></p>
<p>Since indirect SLAM relies on detecting sharp features, the tracked features disappear and tracking fails as the scene’s focus changes. This can occur in systems whose cameras have variable/auto focus, and when images blur due to motion.</p>
<p><img loading="lazy" class="size-full wp-image-499 aligncenter" src="https://i0.wp.com/www.kudan.io/jp/wp-content/uploads/sites/3/2020/11/03-DTAM-3D-Reconstruction.gif?resize=480%2C269&#038;ssl=1" alt="" width="480" height="269" data-recalc-dims="1" /></p>
<p>In this instance, you can see the benefits of having a denser map, where an accurate 3D reconstruction of the scene becomes possible.</p>
<p>Source video: <a href="https://www.youtube.com/watch?v=Df9WhgibCQA" target="_blank" rel="noopener noreferrer">https://www.youtube.com/watch?v=Df9WhgibCQA</a></p>
<h3><b>Visual odometry and SLAM</b></h3>
<p>We will start seeing more references to visual odometry (VO) as we move forward, and I want to keep everyone on the same page in terms of terminology. As described in previous articles, visual SLAM is the process of localizing (understanding the current location and pose) and mapping the environment at the same time, using visual sensors. An important technique introduced by indirect visual SLAM (more specifically by Parallel Tracking and Mapping &#8211; PTAM) was parallelizing the tracking, mapping, and optimization tasks onto separate threads, where one thread tracks while the others build and optimize the map. For the purposes of this discussion, VO can be considered the localization part of SLAM: the VO process provides inputs that the machine uses to build a map, but additional functions are needed for map consistency and optimization. A toy sketch of this threading pattern follows.</p>
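<p>The sketch below is a toy Python illustration of that division of labor: a fast tracking loop estimates a pose for every frame, while a slower mapping loop refines the map from occasional keyframes on another thread. Frame names, timings, and the keyframe rule are placeholders, not any system&#8217;s real logic.</p>
<pre><code>import queue
import threading
import time

keyframes = queue.Queue()

def tracking_loop(frames):
    """Runs at frame rate; hands selected keyframes to the mapper."""
    for i, frame in enumerate(frames):
        pose = "pose_for_" + frame      # placeholder pose estimation
        if i % 5 == 0:                  # promote every 5th frame
            keyframes.put(frame)
        time.sleep(0.01)
    keyframes.put(None)                 # signal end of stream

def mapping_loop():
    """Runs slower; builds and optimizes the map from keyframes."""
    while True:
        kf = keyframes.get()
        if kf is None:
            break
        time.sleep(0.05)                # mapping/optimization is slower
        print("map refined with", kf)

mapper = threading.Thread(target=mapping_loop)
mapper.start()
tracking_loop(["frame%03d" % i for i in range(20)])
mapper.join()
</code></pre>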
<h3>Large Scale Direct SLAM (LSD-SLAM, 2014)</h3>
<p>Building on earlier work on the utilization of semi-dense depth maps for visual <a href="https://en.wikipedia.org/wiki/Visual_odometry" target="_blank" rel="noopener noreferrer">odometry</a>, Jakob Engel et al. proposed the idea of Large Scale Direct SLAM. Instead of using all available pixels, LSD-SLAM looks at high-gradient regions of the scene (particularly edges) and analyzes the pixels within those regions; the idea is that low-gradient or uniform pixel areas offer very little to track between frames for estimating depth. For single cameras, the algorithm uses pixels from keyframes as the baseline for stereo depth calculations.<br />
The following image highlights the regions that have high intensity gradients, which show up as lines or edges, unlike indirect SLAM, which typically detects corners and blobs as features.</p>
<p style="text-align: center;"><img loading="lazy" class="size-full wp-image-504 aligncenter" src="https://i0.wp.com/www.kudan.io/jp/wp-content/uploads/sites/3/2020/11/スクリーンショット-2020-11-17-23.14.08.png?resize=846%2C324&#038;ssl=1" alt="" width="846" height="324" data-recalc-dims="1" /><i>Image from Engel’s 2013 paper on “</i><a href="https://jsturm.de/publications/data/engel2013iccv.pdf" target="_blank" rel="noopener noreferrer"><i><span style="font-weight: 400;">Semi-dense visual odometry for monocular camera</span></i></a></p>
<p>To extend the visual odometry into a full SLAM solution, a pose graph and its optimization were introduced, along with loop closure to ensure map consistency and scale. In the following clip, you can see a semi-dense map being created and loop closure in action with LSD-SLAM: the map snaps together as the ends connect when the camera returns to a previously mapped location.</p>
<p style="text-align: center;"><img loading="lazy" class=" wp-image-502 aligncenter" src="https://i0.wp.com/www.kudan.io/jp/wp-content/uploads/sites/3/2020/11/03-LSD-SLAM.gif?resize=545%2C303&#038;ssl=1" alt="" width="545" height="303" data-recalc-dims="1" />Source video: <a href="https://www.youtube.com/watch?v=GnuQzP3gty4" target="_blank" rel="noopener noreferrer">https://www.youtube.com/watch?v=GnuQzP3gty4</a></p>
<p>With the move towards a semi-dense map, LSD-SLAM was able to move computing back onto the CPU, and thus onto general computing devices including high-end mobile devices. Variations and development upon the original work can be found here: <a href="https://vision.in.tum.de/research/vslam/lsdslam" target="_blank" rel="noopener noreferrer">https://vision.in.tum.de/research/vslam/lsdslam</a></p>
<h3>Semi-direct Visual Odometry (SVO / SVO2, 2014 / 2016)</h3>
<p>In the same year as LSD-SLAM, Forster et al. continued to extend visual odometry with the introduction of “Semi-direct visual odometry (SVO).” SVO takes a further step toward sparser maps with a direct method, but also blurs the line between indirect and direct SLAM. Unlike other direct methods, SVO extracts feature points from keyframes, but uses the direct method to perform frame-to-frame motion estimation on the tracked features. In addition, SVO performs bundle adjustment to optimize the structure and pose. Extracted 2D features have their depth estimated using a probabilistic depth filter, and each becomes a 3D feature that is added to the map once it crosses a given certainty threshold.<br />
The advantages of SVO are that it operates in near-constant time and can run at relatively high frame rates, with good positional accuracy under fast and variable motion. However, without loop closure or global map optimization, SVO provides only the tracking component of SLAM.</p>
<p style="text-align: center;"><img loading="lazy" class=" wp-image-503 aligncenter" src="https://i0.wp.com/www.kudan.io/jp/wp-content/uploads/sites/3/2020/11/03-SVO.gif?resize=513%2C289&#038;ssl=1" alt="" width="513" height="289" data-recalc-dims="1" />Source video: <a href="https://www.youtube.com/watch?v=2YnIMfw6bJY" target="_blank" rel="noopener noreferrer">https://www.youtube.com/watch?v=2YnIMfw6bJY</a></p>
<h3><b>Direct Sparse Odometry (DSO, 2016)</b></h3>
<p>After introducing LSD-SLAM, Engel et al. took the next leap in direct SLAM with direct sparse odometry (DSO) &#8211; a direct method with a sparse map. Unlike SVO, DSO does not perform feature-point extraction and relies on the direct photometric method. However, instead of using the entire camera frame, DSO splits the image into regions and samples pixels with sufficient intensity gradient from each region for tracking, which ensures that the tracked points are spread across the image (see the sketch after the comparison clips below).<br />
The result of these variations is an elegant direct VO solution. Similar to SVO, the initial implementation wasn’t a complete SLAM solution due to the lack of global map optimization, including loop closure, but the resulting maps had relatively small drift. As you can see in the following clip, without loop closure and global map optimization the map is slightly misaligned (note the doubled garbage bins at the end of the clip).</p>
<p><img loading="lazy" class=" wp-image-498 aligncenter" src="https://i0.wp.com/www.kudan.io/jp/wp-content/uploads/sites/3/2020/11/03-DSO-Map.gif?resize=497%2C293&#038;ssl=1" alt="" width="497" height="293" data-recalc-dims="1" /><br />
The following clip shows the differences between DSO, LSD-SLAM, and ORB-SLAM (feature-based) in tracking performance, and unoptimized mapping (no loop closure).</p>
<p><img loading="lazy" class="size-full wp-image-497 aligncenter" src="https://i0.wp.com/www.kudan.io/jp/wp-content/uploads/sites/3/2020/11/03-DSO-Comparison.gif?resize=480%2C268&#038;ssl=1" alt="" width="480" height="268" data-recalc-dims="1" /></p>
<p>You can see LSD-SLAM lose tracking midway through the video, while the ORB-SLAM map suffers from scale drift, which would have been corrected upon loop closure. It is worth noting that even without loop closure, DSO generates a fairly accurate map.</p>
<p>Source video: <a href="https://www.youtube.com/watch?v=C6-xwSOOdqQ" target="_blank" rel="noopener noreferrer">https://www.youtube.com/watch?v=C6-xwSOOdqQ</a></p>
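<p>To make the point-selection idea concrete, here is a small Python sketch that splits the image into grid cells and keeps the strongest-gradient pixel in each cell, so tracked points stay spread across the frame. This is a simplified reading of the approach, not DSO&#8217;s actual code.</p>
<pre><code>import numpy as np

def select_points(img, cell=8):
    """Pick at most one high-gradient pixel per cell for tracking."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    points = []
    h, w = img.shape
    for top in range(0, h, cell):
        for left in range(0, w, cell):
            block = mag[top:top + cell, left:left + cell]
            dv, du = np.unravel_index(np.argmax(block), block.shape)
            if block[dv, du] > 0:        # skip completely textureless cells
                points.append((top + dv, left + du))
    return points

pts = select_points(np.random.rand(32, 32))
print(len(pts), "points, at most one per cell")
</code></pre>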
<p>There is continuing work on improving DSO with the inclusion of loop closure and other camera configurations. However, DSO continues to be a leading solution for direct SLAM. The research and extensions of DSO can be found here: <a href="https://vision.in.tum.de/research/vslam/dso" target="_blank" rel="noopener noreferrer">https://vision.in.tum.de/research/vslam/dso</a></p>
<h3><b>Final Words</b></h3>
<p>While the underlying sensor and the camera stayed the same from feature-based indirect SLAM to direct SLAM, we saw how the shift in methodology inspired these diverse problem-solving approaches. We’ve seen the maps go from mostly sparse with indirect SLAM to becoming dense, semi-dense, and then sparse again with the latest algorithms. At the same time, computing requirements have dropped from a high-end computer to a high-end mobile device. It’s important to keep in mind what problem is being solved with any particular SLAM solution, its constraints, and whether its capabilities are best suited for the expected operating environment.</p>
<p><b>For further reading:</b></p>
<ol>
<li>R. Newcombe, S. Lovegrove, A. Davison, “DTAM: Dense tracking and mapping in real-time” (<a href="https://www.doc.ic.ac.uk/~ajd/Publications/newcombe_etal_iccv2011.pdf">PDF</a>)</li>
<li>J. Engel, J. Sturm, D. Cremers, “Semi-dense visual odometry for a monocular camera” (<a href="https://jsturm.de/publications/data/engel2013iccv.pdf">PDF</a>)</li>
<li>J. Engel, T. Schops, D. Cremers, “LSD-SLAM: Large-scale direct monocular SLAM” (<a href="https://vision.in.tum.de/_media/spezial/bib/engel14eccv.pdf">PDF</a>)</li>
<li>C. Forster, M. Pizzoli, D. Scaramuzza, “SVO: Fast semi-direct monocular visual odometry” (<a href="http://rpg.ifi.uzh.ch/docs/ICRA14_Forster.pdf">PDF</a>)</li>
<li>C. Forster, Z. Zhang, M. Gassner, M. Werlberger, D. Scaramuzza, “SVO: Semi-direct visual odometry for monocular and multi-camera systems” (<a href="http://rpg.ifi.uzh.ch/docs/TRO16_Forster-SVO.pdf">PDF</a>)</li>
<li>J. Engel, V. Koltun, D. Cremers, “Direct Sparse Odometry” (<a href="https://arxiv.org/pdf/1607.02565.pdf">PDF</a>)</li>
</ol><p>The post <a href="https://www.kudan.io/blog/direct-visual-slam/">Direct Visual SLAM</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">508</post-id>	</item>
	</channel>
</rss>
