VacuumCleaner

Category: Artificial Intelligence / Neural Networks / Deep Learning
Development tool: C#
File size: 2608KB
Downloads: 0
Upload date: 2014-02-13 16:44:39
Uploader: sh-1993
Description: VacuumCleaner, a Vacuum Cleaner AI toy problem environment

File list:
LICENSE (1069, 2014-02-14)
Plots (0, 2014-02-14)
Plots\dirt_1.png (96571, 2014-02-14)
Plots\dirt_2.png (95261, 2014-02-14)
Plots\dirt_3.png (89073, 2014-02-14)
Plots\dirt_4.png (95175, 2014-02-14)
Plots\energy_1.png (98076, 2014-02-14)
Plots\energy_2.png (94766, 2014-02-14)
Plots\energy_3.png (92049, 2014-02-14)
Plots\energy_4.png (88293, 2014-02-14)
Plots\results.xlsx (52036, 2014-02-14)
RawTex (0, 2014-02-14)
RawTex\FloorTex.jpg (26401, 2014-02-14)
RawTex\brick.blend (555600, 2014-02-14)
RawTex\cleaner.blend (512272, 2014-02-14)
RawTex\cleaner.svg (1864, 2014-02-14)
RawTex\dirt.blend (460408, 2014-02-14)
RawTex\dirt.mtl (53, 2014-02-14)
RawTex\dirt.svg (15128, 2014-02-14)
RawTex\dirt2.blend (468904, 2014-02-14)
RawTex\floor.blend (468508, 2014-02-14)
RawTex\wall.jpg (40376, 2014-02-14)
Skins (0, 2014-02-14)
Skins\Default (0, 2014-02-14)
Skins\Default\Cursors (0, 2014-02-14)
Skins\Default\Cursors\Busy.xnb (77529, 2014-02-14)
Skins\Default\Cursors\Cross.xnb (4375, 2014-02-14)
Skins\Default\Cursors\Default.xnb (4375, 2014-02-14)
Skins\Default\Cursors\DiagonalLeft.xnb (4375, 2014-02-14)
Skins\Default\Cursors\DiagonalRight.xnb (4375, 2014-02-14)
Skins\Default\Cursors\Horizontal.xnb (4375, 2014-02-14)
Skins\Default\Cursors\Move.xnb (4375, 2014-02-14)
Skins\Default\Cursors\Text.xnb (4375, 2014-02-14)
Skins\Default\Cursors\Vertical.xnb (4375, 2014-02-14)
Skins\Default\Fonts (0, 2014-02-14)
Skins\Default\Fonts\Default.xnb (106618, 2014-02-14)
... ...

This project is an environment for experimenting with a toy AI problem, the vacuum cleaner world, originally described in the book [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/). It is a C# adaptation of [this C++ project](http://web.ntnu.edu.tw/~tcchiang/ai/Vacuum%20Cleaner%20World.htm) with a more user-friendly interface.

[Video of environment usage](http://www.youtube.com/watch?v=xZDnrxhIxrM)

Problem description
======

In this simple world, the vacuum cleaner agent has a bump sensor and a dirt sensor, so it knows whether it has hit a wall and whether the current tile is dirty. It can move left, right, up, and down, clean dirt, or idle. The performance measure is to maximize the number of clean rooms over a given period while minimizing energy consumption. The geography of the environment is unknown. At each time step, each room has a certain chance of gaining 1 unit of dirt.

Prior knowledge
------

1. The environment is a square surrounded by walls.
2. Each cell is either a wall or a room.
3. Walls are always clean.
4. The agent cannot pass through walls.
5. The agent can move north, south, east, and west. Each move costs 1 point of energy.
6. The agent can clean dirt, removing 1 unit of dirt at a time. Each cleaning action costs 2 points of energy.
7. The agent can stay idle at no energy cost.

Performance measure
------

Given a period T, the goals are to:

1. Minimize the total amount of dirt across all rooms over T.
2. Minimize the energy consumed.

Agents
======

The project contains 3 default agents:

* RandomAgent - performs a random action on each iteration.
* ModelAgent - works in 2 stages:
  1. Map discovery. A (2n - 1) x (2n - 1) map is created, where n is the width/height of the real map, and the agent is assumed to start at its center. The agent picks a neighboring tile it has not yet visited (call it black) and moves to it. If the current tile has no unexplored neighbors (it is white), the agent searches for the shortest path to the nearest grey tile (visited, but with unexplored neighbors) using the A* algorithm, with the Manhattan distance to the nearest grey tile as the heuristic. When no grey tiles remain, all reachable tiles have been explored and the first stage is finished. All unexplored tiles are then marked as walls to avoid problems in the second stage, the top-left tile coordinates are determined, and the map is trimmed to a smaller (n x n) map.
  2. Regular map traversal. A simple greedy algorithm is used: the agent moves to the neighboring tile that has not been visited for the longest time. It may decide to idle, using an approximation of the dirt respawn time. To approximate the respawn time, the agent sums all time intervals between dirt pickups and divides by the number of dirt collections.
* ModelAgentNoIdle - behaves like the previous agent but does not try to predict when to idle.

Renderers
======

Two default renderers are available for displaying the map and the agent:

* 2D renderer - renders a classic 2D tile map
* 3D renderer - renders a textured 3D tile map

System requirements
======

* Windows
* [XNA 4.0 Refresh](http://msxna.codeplex.com/releases)
* Microsoft Visual Studio 2010, 2012, or 2013

Used libraries/assets
======

* [XNA 4.0 Refresh](http://en.wikipedia.org/wiki/Microsoft_XNA)
* [Neoforce Controls](http://neoforce.codeplex.com/) ([License](http://neoforce.codeplex.com/license))
* Some textures from [CG Textures](http://www.cgtextures.com/)

Default agents work plots
======

(Plot images: Energy and Dirt for maps 1-4; see Plots\energy_1.png through Plots\energy_4.png and Plots\dirt_1.png through Plots\dirt_4.png.)
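The A* search used in ModelAgent's map-discovery stage can be sketched as follows. This is a minimal, hypothetical C# sketch (not code from this repository): it runs A* with a Manhattan-distance heuristic over a boolean tile grid with unit move costs, which is exactly the setting the agent faces when heading for the nearest grey tile. All names here are illustrative.

```csharp
using System;
using System.Collections.Generic;

class AStarSketch
{
    static readonly (int dr, int dc)[] Moves = { (-1, 0), (1, 0), (0, -1), (0, 1) };

    static int Manhattan((int r, int c) a, (int r, int c) b)
        => Math.Abs(a.r - b.r) + Math.Abs(a.c - b.c);

    // Returns the number of moves on a shortest path, or -1 if unreachable.
    // walkable: true = room, false = wall (walls cannot be entered).
    public static int ShortestPath(bool[,] walkable, (int r, int c) start, (int r, int c) goal)
    {
        int rows = walkable.GetLength(0), cols = walkable.GetLength(1);
        var g = new Dictionary<(int, int), int> { [start] = 0 }; // cost from start
        var open = new List<(int r, int c)> { start };
        var closed = new HashSet<(int, int)>();

        while (open.Count > 0)
        {
            // Pick the open node with smallest f = g + h.
            // A linear scan keeps the sketch simple; a priority queue is the usual choice.
            int best = 0;
            for (int i = 1; i < open.Count; i++)
                if (g[open[i]] + Manhattan(open[i], goal) <
                    g[open[best]] + Manhattan(open[best], goal))
                    best = i;
            var cur = open[best];
            open.RemoveAt(best);

            if (cur == goal) return g[cur];
            closed.Add(cur);

            foreach (var (dr, dc) in Moves)
            {
                var nxt = (r: cur.r + dr, c: cur.c + dc);
                if (nxt.r < 0 || nxt.r >= rows || nxt.c < 0 || nxt.c >= cols) continue;
                if (!walkable[nxt.r, nxt.c] || closed.Contains(nxt)) continue;
                int tentative = g[cur] + 1; // every move costs 1 energy point
                if (!g.TryGetValue(nxt, out int old) || tentative < old)
                {
                    g[nxt] = tentative;
                    if (!open.Contains(nxt)) open.Add(nxt);
                }
            }
        }
        return -1; // goal not reachable
    }

    static void Main()
    {
        // 3x3 room with a wall in the middle: the detour costs 4 moves instead of 2.
        bool[,] map =
        {
            { true, true,  true },
            { true, false, true },
            { true, true,  true },
        };
        Console.WriteLine(ShortestPath(map, (1, 0), (1, 2))); // prints 4
    }
}
```

Because Manhattan distance never overestimates the true cost on a 4-connected grid with unit moves, the heuristic is admissible and the first time the goal is expanded its path length is optimal.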
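The idle heuristic in ModelAgent's second stage can be sketched the same way. This hypothetical snippet (names are illustrative, not from the repository) records the time step of each dirt pickup and estimates the respawn time as the mean interval between consecutive pickups, one reading of "sums all time intervals between dirt pickups and divides by the number of collections".

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of the idle-time heuristic: the sum of all intervals
// between consecutive pickups telescopes to (last - first), so the mean
// interval is that span divided by the number of intervals.
class DirtRespawnEstimator
{
    private readonly List<int> pickupSteps = new List<int>();

    public void RecordPickup(int timeStep) => pickupSteps.Add(timeStep);

    // Mean interval between consecutive pickups; 0 until two pickups exist.
    public double EstimatedRespawnTime()
    {
        if (pickupSteps.Count < 2) return 0;
        int totalInterval = pickupSteps[pickupSteps.Count - 1] - pickupSteps[0];
        return (double)totalInterval / (pickupSteps.Count - 1);
    }

    static void Main()
    {
        var est = new DirtRespawnEstimator();
        // Dirt collected at steps 3, 9, 15, 24 -> intervals 6, 6, 9 -> mean 7.
        foreach (int t in new[] { 3, 9, 15, 24 }) est.RecordPickup(t);
        Console.WriteLine(est.EstimatedRespawnTime()); // prints 7
    }
}
```

An agent sitting on a freshly cleaned tile could then idle for roughly this many steps before the tile is likely to be dirty again, instead of burning energy on moves.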
