Theoretical results suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g., in vision, language, and other AI-level tasks), one may need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers or in complicated propositional formulae re-using many sub-formulae. Searching the parameter space of deep architectures is a difficult task, but learning algorithms such as those for Deep Belief Networks have recently been proposed to tackle this problem with notable success, beating the state-of-the-art in certain areas. This monograph discusses the motivations and principles regarding learning algorithms for deep architectures, in particular those exploiting as building blocks unsupervised learning of single-layer models such as Restricted Boltzmann Machines, used to construct deeper models such as Deep Belief Networks.
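As a rough illustration of the single-layer building block the abstract mentions, the sketch below trains a tiny Restricted Boltzmann Machine with one-step contrastive divergence (CD-1), the standard approximate learning rule for RBMs. All layer sizes, the learning rate, and the toy binary pattern are invented for illustration; the monograph itself develops the actual theory and algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 6 visible units, 3 hidden units.
n_visible, n_hidden = 6, 3
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))  # weight matrix
b = np.zeros(n_visible)                               # visible biases
c = np.zeros(n_hidden)                                # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, lr=0.1):
    """One contrastive-divergence (CD-1) update from a visible vector v0."""
    global W, b, c
    # Up-pass: hidden activation probabilities given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(n_hidden) < ph0).astype(float)   # sample hidden states
    # Down-pass: reconstruct visible units, then re-infer hidden probabilities.
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)
    # Update: data-driven correlations minus reconstruction-driven correlations.
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    b += lr * (v0 - pv1)
    c += lr * (ph0 - ph1)

# Train briefly on a single toy binary pattern.
v = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    cd1_step(v)
```

After these updates the RBM's reconstruction of `v` (up-pass followed by down-pass) moves close to the training pattern. In a Deep Belief Network, the hidden activations of one trained RBM become the "visible" data for the next RBM in the stack.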