Homepage of Wolfgang Maass

Wolfgang Maass: Publications

Link to PubMed
Link to Google Scholar
YouTube Lecture
This list is also available as a BibTeX file.

[223]
Z. Jonke, S. Habenschuss, and W. Maass. A theoretical basis for efficient computations with noisy spiking neurons. arXiv.org, arXiv:1412.5862, 2014. (link to the PDF)

[222]
R. Legenstein and W. Maass. Ensembles of spiking neurons with noise support optimal probabilistic inference in a dynamically changing environment. PLOS Computational Biology, 10(10):e1003859, 2014. (Journal link to the PDF)

[221]
W. Maass. Noise as a resource for computation and learning in networks of spiking neurons. Special Issue of the Proc. of the IEEE on "Engineering Intelligent Electronic Systems based on Computational Neuroscience", 102(5):860-880, 2014. (PDF, 1798 KB).

[220]
D. Kappel, B. Nessler, and W. Maass. STDP installs in winner-take-all circuits an online approximation to hidden Markov model learning. PLOS Computational Biology, 10(3):e1003511, 2014. (Journal link to the PDF)

[219]
S. Habenschuss, Z. Jonke, and W. Maass. Stochastic computations in cortical microcircuit models. PLOS Computational Biology, 9(11):e1003311, 2013. (PDF). (Additional technical information PDF)

[218]
S. Klampfl and W. Maass. Emergence of dynamic memory traces in cortical microcircuit models through STDP. The Journal of Neuroscience, 33(28):11515-11529, 2013. (PDF, 3984 KB).

[217]
B. Nessler, M. Pfeiffer, L. Buesing, and W. Maass. Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity. PLOS Computational Biology, 9(4):e1003037, 2013. (Journal link to the PDF)

[216]
S. Habenschuss, H. Puhr, and W. Maass. Emergence of optimal decoding of population codes through STDP. Neural Computation, 25(6):1371-1407, 2013. (PDF, 1105 KB).

[215]
E. A. Rueckert, G. Neumann, M. Toussaint, and W. Maass. Learned graphical models for probabilistic planning provide a new class of movement primitives. Frontiers in Computational Neuroscience, 6:1-20, 2013. doi:10.3389/fncom.2012.00097. (PDF, 1621 KB). (Journal link to the PDF)

[214]
G. M. Hoerzer, R. Legenstein, and W. Maass. Emergence of complex computational structures from chaotic neural networks through reward-modulated Hebbian learning. Cerebral Cortex, 24:677-690, 2014. (PDF, 1505 KB). (Supplementary material PDF)

[213]
D. Probst, W. Maass, H. Markram, and M. O. Gewaltig. Liquid computing in a simplified model of cortical layer IV: Learning to balance a ball. In Proceedings of the 22nd International Conference on Artificial Neural Networks and Machine Learning -- ICANN 2012, Alessandro E.P. Villa, Wlodzislaw Duch, Peter Erdi, Francesco Masulli, and Günther Palm, editors, volume 7552 of Lecture Notes in Computer Science, pages 209-216. Springer, 2012. (PDF, 451 KB). (Journal link to the PDF)

[212]
H. Hauser, A. J. Ijspeert, R. M. Füchslin, R. Pfeifer, and W. Maass. The role of feedback in morphological computation with compliant bodies. Biological Cybernetics, published 06 Sept 2012. doi: 10.1007/s00422-012-0516-4. (PDF, 1573 KB). (Journal link to the PDF)

[211]
S. Klampfl, S. V. David, P. Yin, S. A. Shamma, and W. Maass. A quantitative analysis of information about past and present stimuli encoded by spikes of A1 neurons. Journal of Neurophysiology, 108:1366-1380, 2012. (PDF, 1045 KB). (Journal link to the abstract PDF)

[210]
M. Pfeiffer, M. Hartbauer, A. B. Lang, W. Maass, and H. Römer. Probing real sensory worlds of receivers with unsupervised clustering. PLoS ONE, 7(6):e37354. doi:10.1371, 2012. (PDF, 5928 KB). (Journal link to the PDF)

[209]
H. Hauser, A. J. Ijspeert, R. M. Füchslin, R. Pfeifer, and W. Maass. Towards a theoretical foundation for morphological computation with compliant bodies. Biological Cybernetics, 105(5-6):355-370, 2011. (PDF, 1649 KB). (Journal link to the PDF)

[208]
D. Pecevski, L. Büsing, and W. Maass. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons. PLoS Computational Biology, 7(12):e1002294, 2011. (Journal link to the PDF)

[207]
L. Büsing, J. Bill, B. Nessler, and W. Maass. Neural dynamics as sampling: A model for stochastic computation in recurrent networks of spiking neurons. PLoS Computational Biology, 7(11):e1002211, 2011. (Journal link to the PDF)

[206]
R. Legenstein and W. Maass. Branch-specific plasticity enables self-organization of nonlinear computation in single neurons. The Journal of Neuroscience, 31(30):10787-10802, 2011. (PDF). (Commentary by R. P. Costa and P. J. Sjöström in Frontiers in Synaptic Neuroscience PDF)

[205]
H. Hauser, G. Neumann, A. J. Ijspeert, and W. Maass. Biologically inspired kinematic synergies enable linear balance control of a humanoid robot. Biological Cybernetics, 104(4-5):235-249, 2011. (PDF, 1581 KB). (Journal link to the PDF)

[204]
M. J. Rasch, K. Schuch, N. K. Logothetis, and W. Maass. Statistical comparison of spike responses to natural stimuli in monkey area V1 with simulated responses of a detailed laminar network model for a patch of V1. Journal of Neurophysiology, 105:757-778, 2011. (PDF, 1929 KB). (Commentary by W.S. Anderson and B. Kreiman in Current Biology 2011 PDF)

[203]
J. Bill, K. Schuch, D. Brüderle, J. Schemmel, W. Maass, and K. Meier. Compensating inhomogeneities of neuromorphic VLSI devices via short-term synaptic plasticity. Frontiers in Computational Neuroscience, 4:1-14, 2010. doi:10.3389/fncom.2010.00129. (PDF, 2131 KB). (Journal link to the PDF)

[202]
S. Klampfl and W. Maass. A theoretical basis for emergent pattern discrimination in neural systems through slow feature extraction. Neural Computation, 22(12):2979-3035, 2010. Epub 2010 Sep 21. (PDF, 1080 KB).

[201]
R. Legenstein, S. M. Chase, A. B. Schwartz, and W. Maass. A reward-modulated Hebbian learning rule can explain experimentally observed network reorganization in a brain control task. The Journal of Neuroscience, 30(25):8400-8410, 2010. (PDF, 718 KB).

[200]
D. Nikolic, S. Haeusler, W. Singer, and W. Maass. Distributed fading memory for stimulus properties in the primary visual cortex. PLoS Biology, 7(12):1-19, 2009. (PDF, 1301 KB).

[199]
R. Legenstein and W. Maass. An integrated learning rule for branch strength potentiation and STDP. 39th Annual Conference of the Society for Neuroscience, Program 895.20, Poster HH36, 2009.

[198]
S. Klampfl, S.V. David, P. Yin, S.A. Shamma, and W. Maass. Integration of stimulus history in information conveyed by neurons in primary auditory cortex in response to tone sequences. 39th Annual Conference of the Society for Neuroscience, Program 163.8, Poster T6, 2009.

[197]
S. Liebe, G. Hoerzer, N.K. Logothetis, W. Maass, and G. Rainer. Long range coupling between V4 and PF in theta band during visual short-term memory. 39th Annual Conference of the Society for Neuroscience, Program 652.20, Poster Y31, 2009.

[196]
S. Haeusler, K. Schuch, and W. Maass. Motif distribution and computational performance of two data-based cortical microcircuit templates. 38th Annual Conference of the Society for Neuroscience, Program 220.9, 2008.

[195]
L. Buesing and W. Maass. A spiking neuron as information bottleneck. Neural Computation, 22:1961-1992, 2010. (PDF, 706 KB).

[194]
M. Pfeiffer, B. Nessler, R. Douglas, and W. Maass. Reward-modulated Hebbian Learning of Decision Making. Neural Computation, 22:1399-1444, 2010. (PDF, 944 KB).

[193]
R. Legenstein, S. A. Chase, A. B. Schwartz, and W. Maass. Functional network reorganization in motor cortex can be explained by reward-modulated Hebbian learning. In Proc. of NIPS 2009: Advances in Neural Information Processing Systems, D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, volume 22, pages 1105-1113. MIT Press, 2010. (PDF, 246 KB).

[192]
S. Klampfl and W. Maass. Replacing supervised classification learning by Slow Feature Analysis in spiking neural networks. In Proc. of NIPS 2009: Advances in Neural Information Processing Systems, volume 22, pages 988-996. MIT Press, 2010. (PDF, 1656 KB).

[191]
B. Nessler, M. Pfeiffer, and W. Maass. STDP enables spiking neurons to detect hidden causes of their inputs. In Proc. of NIPS 2009: Advances in Neural Information Processing Systems, volume 22, pages 1357-1365. MIT Press, 2010. (PDF, 203 KB).

[190]
R. Legenstein, S. A. Chase, A. B. Schwartz, and W. Maass. A model for learning effects in motor cortex that may facilitate the brain control of neuroprosthetic devices. 38th Annual Conference of the Society for Neuroscience, Program 517.6, 2008.

[189]
W. Maass. Liquid state machines: Motivation, theory, and applications. In Computability in Context: Computation and Logic in the Real World, B. Cooper and A. Sorbi, editors, pages 275-296. Imperial College Press, 2010. (PDF, 847 KB).

[188]
G. Neumann, W. Maass, and J. Peters. Learning complex motions by sequencing simpler motion templates. In Proc. of the 26th Int. Conf. on Machine Learning (ICML 2009), Montreal, 2009. (PDF, 231 KB).

[187]
A. Steimer, W. Maass, and R. Douglas. Belief-propagation in networks of spiking neurons. Neural Computation, 21:2502-2523, 2009. (PDF, 651 KB).

[186]
D. Buonomano and W. Maass. State-dependent computations: Spatiotemporal processing in cortical networks. Nature Reviews Neuroscience, 10(2):113-125, 2009. (PDF, 665 KB).

[185]
S. Haeusler, K. Schuch, and W. Maass. Motif distribution, dynamical properties, and computational performance of two data-based cortical microcircuit templates. J. of Physiology (Paris), 103(1-2):73-87, 2009. (PDF, 844 KB).

[184]
B. Nessler, M. Pfeiffer, and W. Maass. Hebbian learning of Bayes optimal decisions. In Proc. of NIPS 2008: Advances in Neural Information Processing Systems, 21, 2009. MIT Press. (PDF, 224 KB).

[183]
R. Legenstein, D. Pecevski, and W. Maass. A learning theory for reward-modulated spike-timing-dependent plasticity with application to biofeedback. PLoS Computational Biology, 4(10):e1000180, 2008. (Journal link to the PDF)

[182]
L. Buesing and W. Maass. Simplified rules and theoretical analysis for information bottleneck optimization and PCA with spiking neurons. In Proc. of NIPS 2007, Advances in Neural Information Processing Systems, volume 20. MIT Press, 2008. (PDF, 394 KB).

[181]
R. Legenstein, D. Pecevski, and W. Maass. Theoretical analysis of learning with reward-modulated spike-timing-dependent plasticity. In Proc. of NIPS 2007, Advances in Neural Information Processing Systems, volume 20, pages 881-888. MIT Press, 2008. (PDF, 199 KB).

[180]
G. Neumann, M. Pfeiffer, and W. Maass. Efficient continuous-time reinforcement learning with adaptive state graphs. In Proceedings of the 18th European Conference on Machine Learning (ECML) and the 11th European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD) 2007, Warsaw (Poland). Springer (Berlin), 2007. (PDF, 366 KB).

[179]
S. Klampfl, R. Legenstein, and W. Maass. Spiking neurons can learn to solve information bottleneck problems and extract independent components. Neural Computation, 21(4):911-959, 2009. (PDF, 1088 KB).

[178]
W. Maass. Liquid computing. In Proceedings of the Conference CiE'07: COMPUTABILITY IN EUROPE 2007, Siena (Italy), Lecture Notes in Computer Science, pages 507-516. Springer (Berlin), 2007. (PDF, 547 KB).

[177]
S. Haeusler, W. Singer, W. Maass, and D. Nikolic. Superposition of information in large ensembles of neurons in primary visual cortex. 37th Annual Conference of the Society for Neuroscience, Program 176.2, Poster II23, 2007.

[176]
D. Sussillo, T. Toyoizumi, and W. Maass. Self-tuning of neural circuits through short-term synaptic plasticity. Journal of Neurophysiology, 97:4079-4095, 2007. (PDF, 1504 KB).

[175]
H. Hauser, G. Neumann, A. J. Ijspeert, and W. Maass. Biologically inspired kinematic synergies provide a new paradigm for balance control of humanoid robots. In Proceedings of the IEEE-RAS 7th International Conference on Humanoid Robots (Humanoids 2007), 2007. Best Paper Award. http://planning.cs.cmu.edu/humanoids07/p/37.pdf. (PDF, 671 KB).

[174]
H. Jaeger, W. Maass, and J. Principe. Special issue on echo state networks and liquid state machines. Neural Networks, 20(3):287-289, 2007. (PDF, 128 KB).

[173]
M. J. Rasch, A. Gretton, Y. Murayama, W. Maass, and N. K. Logothetis. Inferring spike trains from local field potentials. Journal of Neurophysiology, 99:1461-1476, 2008. (PDF, 873 KB).

[172]
S. Klampfl, R. Legenstein, and W. Maass. Information bottleneck optimization and independent component extraction with spiking neurons. In Proc. of NIPS 2006, Advances in Neural Information Processing Systems, volume 19, pages 713-720. MIT Press, 2007. (PDF, 613 KB).

[171]
D. Nikolic, S. Haeusler, W. Singer, and W. Maass. Temporal dynamics of information content carried by neurons in the primary visual cortex. In Proc. of NIPS 2006, Advances in Neural Information Processing Systems, volume 19, pages 1041-1048. MIT Press, 2007. (PDF, 176 KB).

[170]
R. Legenstein and W. Maass. On the classification capability of sign-constrained perceptrons. Neural Computation, 20(1):288-309, 2008. (PDF, 671 KB).

[169]
W. Maass. Book review of "Imitation of life: how biology is inspiring computing" by Nancy Forbes. Pattern Analysis and Applications, 8(4):390-391, 2006. Springer (London). (PDF, 105 KB).

[168]
W. Maass, P. Joshi, and E. D. Sontag. Computational aspects of feedback in neural circuits. PLoS Computational Biology, 3(1):e165, 2007. (Journal link to the PDF)

[167]
K. Uchizawa, R. Douglas, and W. Maass. Energy complexity and entropy of threshold circuits. In Proceedings of the 33rd International Colloquium on Automata, Languages and Programming, ICALP (1) 2006, Venice, Italy, July 10-14, 2006, Part I, M. Bugliesi, B. Preneel, V. Sassone, and I. Wegener, editors, volume 4051 of Lecture Notes in Computer Science, pages 631-642. Springer, 2006. (PDF, 1790 KB).

[166]
R. Legenstein and W. Maass. Edge of chaos and prediction of computational performance for neural circuit models. Neural Networks, 20(3):323-334, 2007. (PDF, 1480 KB).

[165]
R. Legenstein and W. Maass. What makes a dynamical system computationally powerful?. In New Directions in Statistical Signal Processing: From Systems to Brains, S. Haykin, J. C. Principe, T.J. Sejnowski, and J.G. McWhirter, editors, pages 127-154. MIT Press, 2007. (PDF, 582 KB).

[164]
W. Maass, P. Joshi, and E. D. Sontag. Principles of real-time computing with feedback applied to cortical microcircuit models. In Advances in Neural Information Processing Systems, Y. Weiss, B. Schoelkopf, and J. Platt, editors, volume 18, pages 835-842. MIT Press, 2006. (PDF, 806 KB).

[163]
K. Uchizawa, R. Douglas, and W. Maass. On the computational power of threshold circuits with sparse activity. Neural Computation, 18(12):2994-3008, 2006. (PDF, 111 KB).

[162]
S. Haeusler and W. Maass. A statistical analysis of information processing properties of lamina-specific cortical microcircuit models. Cerebral Cortex, 17(1):149-162, 2007. (PDF, 889 KB).

[161]
R. Legenstein and W. Maass. A criterion for the convergence of learning with spike timing dependent plasticity. In Advances in Neural Information Processing Systems, Y. Weiss, B. Schoelkopf, and J. Platt, editors, volume 18, pages 763-770. MIT Press, 2006. (PDF, 194 KB).

[160]
W. Maass, R. Legenstein, and N. Bertschinger. Methods for estimating the computational power and generalization capability of neural microcircuits. In Advances in Neural Information Processing Systems, L. K. Saul, Y. Weiss, and L. Bottou, editors, volume 17, pages 865-872. MIT Press, 2005. (PDF, 196 KB).

[159]
Y. Fregnac, M. Blatow, J.-P. Changeux, J. de Felipe, A. Lansner, W. Maass, D. A. McCormick, C. M. Michel, H. Monyer, E. Szathmary, and R. Yuste. UPs and DOWNs in cortical computation. In The Interface between Neurons and Global Brain Function, S. Grillner and A. M. Graybiel, editors, Dahlem Workshop Report 93, pages 393-433. MIT Press, 2006. (PDF, 606 KB).

[158]
P. Joshi and W. Maass. Movement generation with circuits of spiking neurons. Neural Computation, 17(8):1715-1738, 2005. (PDF, 1156 KB).

[157]
W. Maass and H. Markram. Theory of the computational function of microcircuit dynamics. In The Interface between Neurons and Global Brain Function, S. Grillner and A. M. Graybiel, editors, Dahlem Workshop Report 93, pages 371-390. MIT Press, 2006. (PDF, 402 KB).

[156]
A. Kaske and W. Maass. A model for the interaction of oscillations and pattern generation with real-time computing in generic neural microcircuit models. Neural Networks, 19(5):600-609, 2006. (PDF, 832 KB).

[155]
O. Melamed, W. Gerstner, W. Maass, M. Tsodyks, and H. Markram. Coding and learning of behavioral sequences. Trends in Neurosciences, 27(1):11-14, 2004. (PDF, 105 KB).

[154]
R. Legenstein, C. Naeger, and W. Maass. What can a neuron learn with spike-timing-dependent plasticity?. Neural Computation, 17(11):2337-2382, 2005. (PDF, 549 KB).

[153]
T. Natschlaeger and W. Maass. Dynamics of information and emergent computation in generic neural microcircuit models. Neural Networks, 18(10):1301-1308, 2005. (PDF, 273 KB).

[151]
P. Joshi and W. Maass. Movement generation and control with generic neural microcircuits. In Biologically Inspired Approaches to Advanced Information Technology. First International Workshop, BioADIT 2004, Lausanne, Switzerland, January 2004, Revised Selected Papers, A. J. Ijspeert, M. Murata, and N. Wakamiya, editors, volume 3141 of Lecture Notes in Computer Science, pages 258-273. Springer Verlag, 2004. (PDF, 596 KB).

[150]
T. Natschlaeger and W. Maass. Information dynamics and emergent computation in recurrent circuits of spiking neurons. In Proc. of NIPS 2003, Advances in Neural Information Processing Systems, S. Thrun, L. Saul, and B. Schoelkopf, editors, volume 16, pages 1255-1262, Cambridge, 2004. MIT Press. (PDF, 180 KB).

[149]
W. Maass, T. Natschlaeger, and H. Markram. Computational models for generic cortical microcircuits. In Computational Neuroscience: A Comprehensive Approach, J. Feng, editor, chapter 18, pages 575-605. Chapman & Hall/CRC, Boca Raton, 2004. (PDF, 863 KB).

[148]
W. Maass, T. Natschlaeger, and H. Markram. Fading memory and kernel properties of generic cortical microcircuit models. Journal of Physiology -- Paris, 98(4-6):315-330, 2004. (PDF, 576 KB).

[147]
W. Maass, T. Natschlaeger, and H. Markram. A model for real-time computation in generic neural microcircuits. In Proc. of NIPS 2002, Advances in Neural Information Processing Systems, S. Becker, S. Thrun, and K. Obermayer, editors, volume 15, pages 229-236. MIT Press, 2003. (PDF, 254 KB).

[146]
W. Maass, R. Legenstein, and H. Markram. A new approach towards vision suggested by biologically realistic neural microcircuit models. In Biologically Motivated Computer Vision. Proc. of the Second International Workshop, BMCV 2002, Tuebingen, Germany, November 22-24, 2002, H. H. Buelthoff, S. W. Lee, T. A. Poggio, and C. Wallraven, editors, volume 2525 of Lecture Notes in Computer Science, pages 282-293. Springer (Berlin), 2002. (PDF, 238 KB).

[145]
W. Maass. On the computational power of neural microcircuit models: Pointers to the literature. In Proc. of the International Conference on Artificial Neural Networks -- ICANN 2002, José R. Dorronsoro, editor, volume 2415 of Lecture Notes in Computer Science, pages 254-256. Springer, 2002. (PDF, 66 KB).

[144]
T. Natschlaeger, H. Markram, and W. Maass. Computer models and analysis tools for neural microcircuits. In Neuroscience Databases. A Practical Guide, R. Koetter, editor, chapter 9, pages 121-136. Kluwer Academic Publishers (Boston), 2003. (PDF, 230 KB).

[143]
T. Natschlaeger, W. Maass, and H. Markram. The "liquid computer": A novel strategy for real-time computing on time series. Special Issue on Foundations of Information Processing of TELEMATIK, 8(1):39-43, 2002. (PDF, 277 KB).

[141]
W. Maass. Computing with spikes. Special Issue on Foundations of Information Processing of TELEMATIK, 8(1):32-36, 2002. (PDF, 330 KB).

[140]
R. Legenstein, H. Markram, and W. Maass. Input prediction and autonomous movement analysis in recurrent circuits of spiking neurons. Reviews in the Neurosciences (Special Issue on Neuroinformatics of Neural and Artificial Computation), 14(1-2):5-19, 2003. (PDF, 179 KB).

[139]
P. L. Bartlett and W. Maass. Vapnik-Chervonenkis dimension of neural nets. In The Handbook of Brain Theory and Neural Networks, M. A. Arbib, editor, pages 1188-1192. MIT Press (Cambridge), 2nd edition, 2003. (PDF, 134 KB).

[138]
W. Maass and H. Markram. Temporal integration in recurrent microcircuits. In The Handbook of Brain Theory and Neural Networks, M. A. Arbib, editor, pages 1159-1163. MIT Press (Cambridge), 2nd edition, 2003. (PDF, 249 KB).

[137]
S. Haeusler, H. Markram, and W. Maass. Perspectives of the high-dimensional dynamics of neural microcircuits from the point of view of low-dimensional readouts. Complexity (Special Issue on Complex Adaptive Systems), 8(4):39-50, 2003. (PDF, 183 KB).

[136]
T. Natschlaeger and W. Maass. Spiking neurons and the induction of finite state machines. Theoretical Computer Science: Special Issue on Natural Computing, 287:251-265, 2002. (PDF, 250 KB).

[135]
W. Maass and H. Markram. On the computational power of circuits of spiking neurons. Journal of Computer and System Sciences, 69(4):593-616, 2004. (PDF, 355 KB).

[134]
R. A. Legenstein and W. Maass. Optimizing the layout of a balanced tree. Technical Report, 2001. (Gzipped PostScript, 22 p., 93 KB). (PDF, 247 KB).

[133]
R. A. Legenstein and W. Maass. Neural circuits for pattern recognition with small total wire length. Theoretical Computer Science, 287:239-249, 2002. (Gzipped PostScript, 18 p., 51 KB). (PDF, 129 KB).

[132]
R. A. Legenstein and W. Maass. Wire length as a circuit complexity measure. Journal of Computer and System Sciences, 70:53-72, 2005. (PDF, 372 KB).

[131]
G. Steinbauer, R. Koholka, and W. Maass. A very short story about autonomous robots. Special Issue on Foundations of Information Processing of TELEMATIK, 8(1):26-29, 2002. (PDF, 363 KB).

[130]
W. Maass, T. Natschlaeger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531-2560, 2002. (PDF, 1993 KB).

[129a]
W. Maass. wetware (English version). In TAKEOVER: Who is Doing the Art of Tomorrow (Ars Electronica 2001), pages 148-152. Springer, 2001. (PDF, 374 KB).

[129b]
W. Maass. wetware (German version). In TAKEOVER: Who is Doing the Art of Tomorrow (Ars Electronica 2001), pages 153-157. Springer, 2001. (PDF, 381 KB).

[128]
W. Maass, G. Steinbauer, and R. Koholka. Autonomous fast learning in a mobile robot. In Sensor Based Intelligent Robots. International Workshop, Dagstuhl Castle, Germany, October 15-25, 2000, Selected Revised Papers, G. D. Hager, H. I. Christensen, H. Bunke, and R. Klein, editors, volume 2238 of Lecture Notes in Computer Science, pages 345-356, 2002. (PDF, 381 KB).

[127]
P. Auer, H. Burgsteiner, and W. Maass. Reducing communication for distributed learning in neural networks. In Proc. of the International Conference on Artificial Neural Networks -- ICANN 2002, José R. Dorronsoro, editor, volume 2415 of Lecture Notes in Computer Science, pages 123-128. Springer, 2002. (Gzipped PostScript, 6 p., 117 KB). (PDF, 101 KB).

[126]
P. Auer, H. Burgsteiner, and W. Maass. A learning rule for very simple universal approximators consisting of a single layer of perceptrons. Neural Networks, 21(5):786-795, 2008. (PDF, 468 KB).

[125]
T. Natschlaeger and W. Maass. Computing the optimally fitted spike train for a synapse. Neural Computation, 13(11):2477-2494, 2001. (Gzipped PostScript, 15 p., 203 KB). (PDF, 176 KB).

[124]
T. Natschlaeger, W. Maass, and A. Zador. Efficient temporal processing with biologically realistic dynamic synapses. Network: Computation in Neural Systems, 12:75-87, 2001. (Gzipped PostScript, 14 p., 109 KB). (PDF, 213 KB).

[123a]
W. Maass. Neural computation: a research topic for theoretical computer science? Some thoughts and pointers. In Current Trends in Theoretical Computer Science, Entering the 21st Century, G. Rozenberg, A. Salomaa, and G. Paun, editors, pages 680-690. World Scientific Publishing, 2001. (Gzipped PostScript, 11 p., 119 KB). (PDF, 223 KB).

[123b]
W. Maass. Neural computation: a research topic for theoretical computer science? Some thoughts and pointers. In Bulletin of the European Association for Theoretical Computer Science (EATCS), volume 72, pages 149-158, 2000.

[122]
R. A. Legenstein and W. Maass. Foundations for a circuit complexity theory of sensory processing. In Proc. of NIPS 2000, Advances in Neural Information Processing Systems, T. K. Leen, T. G. Dietterich, and V. Tresp, editors, volume 13, pages 259-265, Cambridge, 2001. MIT Press. (Gzipped PostScript, 7 p., 40 KB). (PDF, 94 KB). The poster presented at NIPS is available as gzipped PostScript.

[121]
T. Natschlaeger and W. Maass. Finding the key to a synapse. In Advances in Neural Information Processing Systems (NIPS '2000), Todd K. Leen, Thomas G. Dietterich, and Volker Tresp, editors, volume 13, pages 138-144, Cambridge, 2001. MIT Press. (Gzipped PostScript, 7 p., 66 KB). (PDF, 124 KB). The poster presented at NIPS is available as Acrobat PDF file.

[120]
W. Maass, A. Pinz, R. Braunstingl, G. Wiesspeiner, T. Natschlaeger, O. Friedl, and H. Burgsteiner. Konstruktion von lernfaehigen mobilen Robotern im Studentenwettbewerb "Robotik 2000" an der Technischen Universitaet Graz [Construction of mobile robots capable of learning in the student competition "Robotik 2000" at Graz University of Technology]. In Telematik, pages 20-24, 2000. (Gzipped PostScript, 8 p., 83 KB). (PDF, 557 KB).

[119]
W. Maass and H. Markram. Synapses as dynamic memory buffers. Neural Networks, 15:155-161, 2002. (Gzipped PostScript, 12 p., 381 KB). (PDF, 216 KB).

[118]
W. Maass. Spike trains -- im Rhythmus neuronaler Zellen [Spike trains: in the rhythm of neuronal cells]. In Katalog der steirischen Landesausstellung gr2000az, R. Kriesche and H. Konrad, editors, pages 36-42. Springer Verlag, 2000.

[117]
W. Maass. Lernende Maschinen [Learning machines]. In Katalog der steirischen Landesausstellung gr2000az, R. Kriesche and H. Konrad, editors, pages 50-56. Springer Verlag, 2000.

[116]
W. Maass. Neural computation with winner-take-all as the only nonlinear operation. In Advances in Neural Information Processing Systems, Sara A. Solla, Todd K. Leen, and Klaus-Robert Mueller, editors, volume 12, pages 293-299. MIT Press (Cambridge), 2000. (Gzipped PostScript, 7 p., 85 KB). (PDF, 75 KB).

[115]
T. Natschlaeger and W. Maass. Fast analog computation in networks of spiking neurons using unreliable synapses. In ESANN'99 Proceedings of the European Symposium on Artificial Neural Networks, pages 417-422, Bruges, Belgium, 1999. (Gzipped PostScript, 6 p., 79 KB). (PDF, 180 KB).

[114]
W. Maass. Computation with spiking neurons. In The Handbook of Brain Theory and Neural Networks, M. A. Arbib, editor, pages 1080-1083. MIT Press (Cambridge), 2nd edition, 2003. (Gzipped PostScript, 17 p., 72 KB). (PDF, 170 KB).

[113]
W. Maass. On the computational power of winner-take-all. Neural Computation, 12(11):2519-2535, 2000. (Gzipped PostScript, 19 p., 160 KB). (PDF, 98 KB).

[112]
W. Maass and T. Natschlaeger. Emulation of Hopfield networks with spiking neurons in temporal coding. In Computational Neuroscience: Trends in Research, J. M. Bower, editor, pages 221-226. Plenum Press, 1998. (Gzipped PostScript, 7 p., 82 KB). (PDF, 187 KB).

[111]
T. Natschlaeger, W. Maass, E. D. Sontag, and A. Zador. Processing of time series by neural circuits with biologically realistic synaptic dynamics. In Advances in Neural Information Processing Systems 2000 (NIPS '2000), Todd K. Leen, Thomas G. Dietterich, and Volker Tresp, editors, volume 13, pages 145-151, Cambridge, 2001. MIT Press. (Gzipped PostScript, 7 p., 60 KB). (PDF, 133 KB). The poster presented at NIPS is available as Acrobat PDF file.

[110]
W. Maass. Paradigms for computing with spiking neurons. In Models of Neural Networks. Early Vision and Attention, J. L. van Hemmen, J. D. Cowan, and E. Domany, editors, volume 4, chapter 9, pages 373-402. Springer (New York), 2002. (Gzipped PostScript, 31 p., 290 KB). (PDF, 570 KB).

[109]
W. Maass and E. D. Sontag. A precise characterization of the class of languages recognized by neural nets under Gaussian and other common noise distributions. In Advances in Neural Information Processing Systems, M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, volume 11, pages 281-287. MIT Press (Cambridge), 1999. (Gzipped PostScript, 7 p., 45 KB). (PDF, 108 KB).

[108]
W. Maass. Das menschliche Gehirn -- nur ein Rechner? [The human brain -- just a computer?]. In Zur Kunst des Formalen Denkens, R. E. Burkard, W. Maass, and P. Weibel, editors, pages 209-233. Passagen Verlag (Wien), 2000. (Gzipped PostScript, 20 p., 153 KB). (PDF, 206 KB).

[107]
W. Maass and E. D. Sontag. Neural systems as nonlinear filters. Neural Computation, 12(8):1743-1772, 2000. (Gzipped PostScript, 26 p., 107 KB). (PDF, 172 KB).

[106]
P. Auer and W. Maass. Introduction to the special issue on computational learning theory. Algorithmica, 22(1/2):1-2, 1998. (PDF, 18 KB).

[105]
W. Maass. Spiking neurons. In Proceedings of the ICSC/IFAC Symposium on Neural Computation 1998 (NC'98), pages 16-20. ICSC Academic Press (Alberta), 1998. Invited talk.

[104]
W. Maass. Models for fast analog computation with spiking neurons. In Proc. of the International Conference on Neural Information Processing 1998 (ICONIP'98) in Kitakyushu, Japan, pages 187-188. IOS Press (Amsterdam), 1998. Invited talk at the special session on "Dynamic Brain".

[103]
W. Maass. On the role of time and space in neural computation. In Proc. of the Federated Conference of CSL'98 and MFCS'98, Mathematical Foundations of Computer Science 1998, volume 1450 of Lecture Notes in Computer Science, pages 72-83. Springer (Berlin), 1998. Invited talk. (Gzipped PostScript, 14 p., 188 KB). (PDF, 729 KB).

[102]
W. Maass and T. Natschlaeger. A model for fast analog computation based on unreliable synapses. Neural Computation, 12(7):1679-1704, 2000. (Gzipped PostScript, 26 p., 211 KB). (PDF, 1304 KB).

[101]
W. Maass and A. Zador. Computing and learning with dynamic synapses. In Pulsed Neural Networks, W. Maass and C. Bishop, editors, pages 321-336. MIT-Press (Cambridge), 1998. (Gzipped PostScript, 16 p., 516 KB). (PDF, 869 KB).

[100]
W. Maass. Computing with spiking neurons. In Pulsed Neural Networks, W. Maass and C. M. Bishop, editors, pages 55-85. MIT Press (Cambridge), 1999. (Gzipped PostScript, 31 p., 666 KB). (PDF, 771 KB).

[99]
W. Maass and T. Natschlaeger. Associative memory with networks of spiking neurons in temporal coding. In Neuromorphic Systems: Engineering Silicon from Neurobiology, L. S. Smith and A. Hamilton, editors, pages 21-32. World Scientific, 1998. (Gzipped PostScript, 13 p., 103 KB). (PDF, 253 KB).

[98]
W. Maass and B. Ruf. On computation with pulses. Information and Computation, 148:202-218, 1999. (Gzipped PostScript, 20 p., 164 KB). (PDF, 196 KB).

[97a]
W. Maass. On the relevance of time in neural computation and learning. Theoretical Computer Science, 261:157-178, 2001. (PDF, 274 KB).

[97b]
W. Maass. On the relevance of time in neural computation and learning. In Proc. of the 8th International Conference on Algorithmic Learning Theory in Sendai (Japan), M. Li and A. Maruoka, editors, volume 1316 of Lecture Notes in Computer Science, pages 364-384. Springer (Berlin), 1997. (Gzipped PostScript, 24 p., 212 KB). (PDF, 410 KB).

[96]
W. Maass and M. Schmitt. On the complexity of learning for spiking neurons with temporal coding. Information and Computation, 153:26-46, 1999. (Gzipped PostScript, 24 p., 136 KB). (PDF, 267 KB).

[95]
W. Maass and E. Sontag. Analog neural nets with Gaussian or other common noise distributions cannot recognize arbitrary regular languages. Neural Computation, 11:771-782, 1999. (Gzipped PostScript, 12 p., 104 KB). (PDF, 109 KB).

[94a]
W. Maass and A. M. Zador. Dynamic stochastic synapses as computational units. Neural Computation, 11(4):903-917, 1999. (Gzipped PostScript, 18 p., 223 KB). (PDF, 228 KB).

[94b]
W. Maass and A. M. Zador. Dynamic stochastic synapses as computational units. In Advances in Neural Information Processing Systems, volume 10, pages 194-200. MIT Press (Cambridge), 1998. (Gzipped PostScript, 571 KB). (PDF, 624 KB).

[93]
W. Maass and T. Natschlaeger. Networks of spiking neurons can emulate arbitrary Hopfield nets in temporal coding. Network: Computation in Neural Systems, 8(4):355-371, 1997. (Gzipped PostScript, 19 p., 188 KB). (PDF, 433 KB).

[92]
W. Maass and M. Schmitt. On the complexity of learning for a spiking neuron. In Proc. of the 10th Conference on Computational Learning Theory 1997, pages 54-61. ACM-Press (New York), 1997. See also Electronic Proc. of the Fifth International Symposium on Artificial Intelligence and Mathematics (http://rutcor.rutgers.edu/~amai). (PDF, 1323 KB).

[91]
W. Maass. A simple model for neural computation with firing rates and firing correlations. Network: Computation in Neural Systems, 9(3):381-397, 1998. (PDF, 288 KB).

[90]
W. Maass. Noisy spiking neurons with temporal coding have more computational power than sigmoidal neurons. In Advances in Neural Information Processing Systems, M. Mozer, M. I. Jordan, and T. Petsche, editors, volume 9, pages 211-217. MIT Press (Cambridge), 1997. (Gzipped PostScript, 13 p., 161 KB). (PDF, 389 KB).

[89]
W. Maass. Analog computations with temporal coding in networks of spiking neurons. In Spatiotemporal Models in Biological and Artificial Systems, F. L. Silva, editor, pages 97-104. IOS-Press, 1997.

[88]
W. Maass and P. Weibel. Ist die Vertreibung der Vernunft reversibel? Ueberlegungen zu einem Wissenschafts- und Medienzentrum. In Jenseits von Kunst, P. Weibel, editor, pages 745-747. Passagen Verlag, 1997. (Gzipped PostScript, 9 p., 18 KB). (PDF, 52 KB).

[87a]
W. Maass and P. Orponen. On the effect of analog noise in discrete-time analog computations. Neural Computation, 10:1071-1095, 1998. (Gzipped PostScript, 19 p., 109 KB). (PDF, 163 KB).

[87b]
W. Maass and P. Orponen. On the effect of analog noise in discrete-time analog computations. In Advances in Neural Information Processing Systems, M. Mozer, M. I. Jordan, and T. Petsche, editors, volume 9, pages 218-224. MIT Press (Cambridge), 1997. (Gzipped PostScript, 7 p., 71 KB). (PDF, 180 KB).

[85a]
W. Maass. Networks of spiking neurons: the third generation of neural network models. Neural Networks, 10:1659-1671, 1997. (Gzipped PostScript, 27 p., 205 KB). (PDF, 1308 KB).

[85b]
W. Maass. Networks of spiking neurons: the third generation of neural network models. In Proc. of the 7th Australian Conference on Neural Networks 1996 in Canberra, Australia, pages 1-10, 1996. (PDF, 778 KB).

[84]
P. Auer, S. Kwek, W. Maass, and M. K. Warmuth. Learning of depth two neural nets with constant fan-in at the hidden nodes. In Proc. of the 9th Conference on Computational Learning Theory 1996, pages 333-343. ACM-Press (New York), 1996. (Gzipped PostScript, 12 p., 101 KB). (PDF, 256 KB).

[83]
W. Maass. A model for fast analog computations with noisy spiking neurons. In Computational Neuroscience: Trends in Research, James Bower, editor, pages 123-127, 1997. (Gzipped PostScript, 6 p., 45 KB). (PDF, 113 KB).

[82]
W. Maass. Fast sigmoidal networks via spiking neurons. Neural Computation, 9:279-304, 1997. (Gzipped PostScript, 27 p., 207 KB). (PDF, 2139 KB).

[81]
W. Maass. Neuronale Netze und maschinelles Lernen am Institut fuer Grundlagen der Informationsverarbeitung an der Technischen Universitaet Graz. Telematik, 2:53-60, 1995. (PDF, 2024 KB).

[80]
W. Maass. On the computational power of noisy spiking neurons. In Advances in Neural Information Processing Systems, D. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, volume 8, pages 211-217. MIT Press (Cambridge), 1996. (Gzipped PostScript, 9 p., 90 KB). (PDF, 579 KB).

[79]
W. Maass and B. Ruf. On the relevance of the shape of postsynaptic potentials for the computational power of networks of spiking neurons. In Proc. of the International Conference on Artificial Neural Networks ICANN, pages 515-520, Paris, 1995. EC2&Cie. (Gzipped PostScript, 6 p., 35 KB). (PDF, 152 KB).

[78]
W. Maass and G. Turan. On learnability and predicate logic (extended abstract). In Proc. of the 4th Bar-Ilan Symposium on Foundations of Artificial Intelligence (BISFAI'95), pages 126-136, Jerusalem, 1995. (PDF, 541 KB).

[77]
P. Auer, R. C. Holte, and W. Maass. Theory and applications of agnostic PAC-learning with small decision trees. In Proc. of the 12th International Machine Learning Conference, Tahoe City (USA), pages 21-29. Morgan Kaufmann (San Francisco), 1995. (Gzipped PostScript, 14 p., 64 KB). (PDF, 219 KB).

[76]
W. Maass. Analog computations on networks of spiking neurons (extended abstract). In Proc. of the 7th Italian Workshop on Neural Nets 1995, pages 99-104. World Scientific (Singapore), 1996. (PDF, 170 KB).

[75]
W. Maass. Lower bounds for the computational power of networks of spiking neurons. Neural Computation, 8(1):1-40, 1996. (Gzipped PostScript, 39 p., 337 KB). (PDF, 2234 KB).

[74]
D. P. Dobkin, D. Gunopulos, and W. Maass. Computing the maximum bichromatic discrepancy, with applications to computer graphics and machine learning. Journal of Computer and System Sciences, 52(3):453-470, June 1996. (Gzipped PostScript, 38 p., 152 KB). (PDF, 813 KB).

[73a]
W. Maass and M. Warmuth. Efficient learning with virtual threshold gates. Information and Computation, 141(1):66-83, 1998. (Gzipped PostScript, 14 p., 54 KB). (PDF, 439 KB).

[73b]
W. Maass and M. Warmuth. Efficient learning with virtual threshold gates. In Proc. of the 12th International Machine Learning Conference, Tahoe City, USA, pages 378-386. Morgan Kaufmann (San Francisco), 1995.

[72]
W. Maass. On the computational complexity of networks of spiking neurons. In Advances in Neural Information Processing Systems, G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, volume 7, pages 183-190. MIT Press (Cambridge), 1995. (Gzipped PostScript, 19 p., 63 KB). (PDF, 209 KB).

[71]
W. Maass. On the complexity of learning on neural nets. In Computational Learning Theory: EuroColt'93, J. Shawe-Taylor and M. Anthony, editors, pages 1-17. Oxford University Press (Oxford), 1994. (Gzipped PostScript, 17 p., 63 KB). (PDF, 237 KB).

[70]
W. Maass. Efficient agnostic PAC-learning with simple hypotheses. In Proc. of the 7th Annual ACM Conference on Computational Learning Theory, pages 67-75, 1994. (Gzipped PostScript, 9 p., 53 KB). (PDF, 170 KB).

[69]
W. Maass. Computing on analog neural nets with arbitrary real weights. In Theoretical Advances in Neural Computation and Learning, V. P. Roychowdhury, K. Y. Siu, and A. Orlitsky, editors, pages 153-172. Kluwer Academic Publishers (Boston), 1994. (Gzipped PostScript, 17 p., 102 KB). (PDF, 256 KB).

[68]
W. Maass. Vapnik-Chervonenkis dimension of neural nets. In The Handbook of Brain Theory and Neural Networks, M. A. Arbib, editor, pages 1000-1003. MIT Press (Cambridge), 1995. (Gzipped PostScript, 10 p., 43 KB). (PDF, 163 KB).

[67]
W. Maass. Perspectives of current research about the complexity of learning on neural nets. In Theoretical Advances in Neural Computation and Learning, V. P. Roychowdhury, K. Y. Siu, and A. Orlitsky, editors, pages 295-336. Kluwer Academic Publishers (Boston), 1994. (Gzipped PostScript, 37 p., 115 KB). (PDF, 356 KB).

[66a]
W. Maass. Neural nets with superlinear VC-dimension. Neural Computation, 6:877-884, 1994. (Gzipped PostScript, 9 p., 43 KB). (PDF, 448 KB).

[66b]
W. Maass. Neural nets with superlinear VC-dimension. In Proceedings of the International Conference on Artificial Neural Networks 1994 (ICANN'94), pages 581-584. Springer (Berlin), 1994. (PDF, 631 KB).

[65a]
W. Maass. Agnostic PAC-learning of functions on analog neural nets. Neural Computation, 7:1054-1078, 1995. (Gzipped PostScript, 22 p., 82 KB). (PDF, 1517 KB).

[65b]
W. Maass. Agnostic PAC-learning of functions on analog neural nets. In Advances in Neural Information Processing Systems, volume 7, pages 311-318, 1995. (Gzipped PostScript, 266 KB). (PDF, 269 KB).

[64a]
P. Auer, P. M. Long, W. Maass, and G. J. Woeginger. On the complexity of function learning. Machine Learning, 18:187-230, 1995. Invited paper in a special issue of Machine Learning. (PDF, 2394 KB).

[64b]
P. Auer, P. M. Long, W. Maass, and G. J. Woeginger. On the complexity of function learning. In Proceedings of the 6th Annual ACM Conference on Computational Learning Theory, pages 392-401, 1993.

[63]
Z. Chen and W. Maass. On-line learning of rectangles and unions of rectangles. Machine Learning, 17:201-223, 1994. Invited paper for a special issue of Machine Learning. (Gzipped PostScript, 15 p., 220 KB). (PDF, 1301 KB).

[62a]
W. Maass. Bounds for the computational power and learning complexity of analog neural nets. SIAM J. on Computing, 26(3):708-732, 1997. (Gzipped PostScript, 32 p., 173 KB). (PDF, 412 KB).

[62b]
W. Maass. Bounds for the computational power and learning complexity of analog neural nets. In Proceedings of the 25th Annual ACM Symposium on Theory of Computing, pages 335-344, 1993. (Gzipped PostScript, 2331 KB). (PDF, 1832 KB).

[61]
W. Maass, G. Schnitger, E. Szemeredi, and G. Turan. Two tapes versus one for off-line Turing machines. Computational Complexity, 3:392-401, 1993. (PDF, 617 KB).

[60]
Z. Chen and W. Maass. A solution of the credit assignment problem in the case of learning rectangles. In Proceedings of the 3rd Int. Workshop on Analogical and Inductive Inference, volume 642 of Lecture Notes in Artificial Intelligence, pages 26-34. Springer, 1992.

[59]
Z. Chen and W. Maass. On-line learning of rectangles. In Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, pages 16-28, 1992. (PDF, 1169 KB).

[58a]
W. Maass, G. Schnitger, and E. Sontag. A comparison of the computational power of sigmoid and boolean threshold circuits. In Theoretical Advances in Neural Computation and Learning, V. P. Roychowdhury, K. Y. Siu, and A. Orlitsky, editors, pages 127-151. Kluwer Academic Publishers (Boston), 1994. (PDF, 305 KB).

[58b]
W. Maass, G. Schnitger, and E. Sontag. On the computational power of sigmoid versus boolean threshold circuits. In Proc. of the 32nd Annual IEEE Symposium on Foundations of Computer Science 1991, pages 767-776, 1991. (PDF, 810 KB).

[57]
W. Maass. On-line learning with an oblivious environment and the power of randomization. In Proceedings of the 4th Annual ACM Workshop on Computational Learning Theory, pages 167-175. Morgan Kaufmann (San Mateo), 1991. (PDF, 786 KB).

[56]
W. Maass and G. Turan. Algorithms and lower bounds for on-line learning of geometrical concepts. Machine Learning, 14:251-269, 1994. (PDF, 1588 KB).

[55a]
W. J. Bultman and W. Maass. Fast identification of geometric objects with membership queries. Information and Computation, 118:48-64, 1995. (PDF, 1276 KB).

[55b]
W. J. Bultman and W. Maass. Fast identification of geometric objects with membership queries. In Proceedings of the 4th Annual ACM Workshop on Computational Learning Theory, pages 337-353, 1991.

[54]
W. Maass and G. Turan. Lower bound methods and separation results for on-line learning models. Machine Learning, 9:107-145, 1992. Invited paper for a special issue of Machine Learning. (PDF, 1839 KB).

[53]
A. Gupta and W. Maass. A method for the efficient design of Boltzmann machines for classification problems. In Advances in Neural Information Processing Systems, R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, volume 3, pages 825-831. Morgan Kaufmann, (San Mateo), 1991. (PDF, 513 KB).

[52]
W. Maass and T. A. Slaman. Splitting and density for the recursive sets of a fixed time complexity. In Proceedings of a Workshop on Logic from Computer Science, Y. N. Moschovakis, editor, pages 359-372. Springer (Berlin), 1991. (PDF, 814 KB).

[51]
W. Maass and T. A. Slaman. The complexity types of computable sets. Journal of Computer and System Sciences, 44:168-192, 1992. Invited paper for a special issue of the J. Comput. Syst. Sci. (PDF, 1852 KB).

[50]
W. Maass and T. A. Slaman. On the relationship between the complexity, the degree, and the extension of a computable set. In Proceedings of the 1989 Recursion Theory Week Oberwolfach, pages 297-322. Springer (Berlin), 1990. (PDF, 1967 KB).

[49]
W. Maass and G. Turan. How fast can a threshold gate learn. In Computational Learning Theory and Natural Learning System: Constraints and Prospects, S. J. Hanson, G. A. Drastal, and R. L. Rivest, editors, pages 381-414. MIT Press (Cambridge), 1994. (PDF, 2200 KB).

[48]
W. Maass and G. Turan. On the complexity of learning from counterexamples and membership queries. In Proceedings of the 31st Annual IEEE Symposium on Foundations of Computer Science, pages 203-210, 1990. (PDF, 650 KB).

[47]
A. Hajnal, W. Maass, P. Pudlak, M. Szegedy, and G. Turan. Threshold circuits of bounded depth. J. Comput. System Sci., 46:129-154, 1993. (PDF, 1459 KB).

[46]
M. Dietzfelbinger and W. Maass. The complexity of matrix transposition on one-tape off-line Turing machines with output tape. Theoretical Computer Science, 108:271-290, 1993. (PDF, 1496 KB).

[45]
M. Dietzfelbinger, W. Maass, and G. Schnitger. The complexity of matrix transposition on one-tape off-line Turing machines. Theoretical Computer Science, 82:113-129, 1991. (PDF, 1103 KB).

[44]
W. Maass and G. Turán. On the complexity of learning from counterexamples (extended abstract). In Proceedings of the 30th Annual IEEE Symposium on Foundations of Computer Science, pages 262-267, 1989. (PDF, 591 KB).

[43]
W. Maass and T. A. Slaman. Extensional properties of sets of time bounded complexity (extended abstract). In Proceedings of the 7th International Conference on Fundamentals of Computation Theory, volume 380 of Lecture Notes in Computer Science, pages 318-326. Springer (Berlin), 1989. (PDF, 506 KB).

[42]
W. Maass and T. A. Slaman. The complexity types of computable sets (extended abstract). In Proceedings of the 4th Annual Conference on Structure in Complexity Theory, pages 231-239. IEEE Computer Society Press (Washington), 1989. (PDF, 475 KB).

[41]
W. Maass and T. A. Slaman. Some problems and results in the theory of actually computable functions. In Proceedings of the Logic Colloquium '88, Padova, Italy, Ferro, Bonotto, Valentini, and Zanardo, editors, pages 79-89. Elsevier Science Publishers (North-Holland), 1989. (PDF, 533 KB).

[40]
W. Maass and K. Sutner. Motion planning among time dependent obstacles. Acta Informatica, 26:93-122, 1988. (PDF, 2173 KB).

[39]
M. Dietzfelbinger and W. Maass. The complexity of matrix transposition on one-tape off-line Turing machines with output tape. In Proceedings of the 15th International Colloquium on Automata, Languages and Programming, volume 317 of Lecture Notes in Computer Science, pages 188-200. Springer (Berlin), 1988. (PDF, 858 KB).

[38]
A. Hajnal, W. Maass, and G. Turan. On the communication complexity of graph properties. In Proceedings of the 20th Annual ACM Symposium on Theory of Computing, pages 186-191, 1988. (PDF, 553 KB).

[37]
N. Alon and W. Maass. Meanders and their applications in lower bound arguments. J. Comput. System Sci., 37:118-129, 1988. Invited paper for a special issue of J. Comput. System Sci. (PDF, 857 KB).

[36]
M. Dietzfelbinger and W. Maass. Lower bound arguments with ``inaccessible'' numbers. Journal of Computer and System Sciences, 36:313-335, 1988. (PDF, 1384 KB).

[35]
W. Maass. On the use of inaccessible numbers and order indiscernibles in lower bound arguments for random access machines. J. Symbolic Logic, 53:1098-1109, 1988. (PDF, 945 KB).

[34]
A. Hajnal, W. Maass, P. Pudlak, M. Szegedy, and G. Turan. Threshold circuits of bounded depth. Journal of Computer and System Sciences, 46:129-154, 1993. (PDF, 1459 KB).

[33]
W. Maass, G. Schnitger, and E. Szemeredi. Two tapes are better than one for off-line Turing machines. In Proceedings of the 19th Annual ACM Symposium on Theory of Computing, pages 94-100, 1987. (PDF, 455 KB).

[32]
D. Hochbaum and W. Maass. Fast approximation algorithms for a nonconvex covering problem. J. Algorithms, 8:305-323, 1987. (PDF, 1000 KB).

[31]
W. Maass and A. Schorr. Speed-up of Turing machines with one work tape and a two-way input tape. SIAM J. Comput., 16:195-202, 1987. (PDF, 690 KB).

[30]
N. Alon and W. Maass. Meanders, Ramsey's theorem and lower bounds for branching programs. In Proceedings of the 27th Annual IEEE Symposium on Foundations of Computer Science, pages 410-417, 1986. (PDF, 1160 KB).

[29]
M. Dietzfelbinger and W. Maass. Two lower bound arguments with ``inaccessible'' numbers. In Proceedings of the Structure in Complexity Theory Conference, Berkeley 1986, volume 223 of Lecture Notes in Computer Science, pages 163-183. Springer (Berlin), 1986. (PDF, 991 KB).

[28]
W. Maass and G. Schnitger. An optimal lower bound for Turing machines with one work tape and two-way input tape. In Proceedings of the Structure in Complexity Theory Conference, Berkeley 1986, volume 223 of Lecture Notes in Computer Science, pages 249-264. Springer (Berlin), 1986. (PDF, 1033 KB).

[27]
W. Maass. On the complexity of nonconvex covering. SIAM J. Computing, 15:453-467, 1986. (PDF, 1654 KB).

[26]
W. Maass. Are recursion theoretic arguments useful in complexity theory? In Proceedings of the International Conference on Logic, Methodology and Philosophy of Science, Salzburg 1983, pages 141-158. North-Holland (Amsterdam), 1986. (PDF, 1051 KB).

[25]
W. Maass. Combinatorial lower bound arguments for deterministic and nondeterministic Turing machines. Transactions of the American Mathematical Society, 292(2):675-693, 1985. (PDF, 2083 KB).

[24]
M. Dietzfelbinger and W. Maass. Strong reducibilities in alpha- and beta-recursion theory. In Proceedings of the 1984 Recursion Theory Week Oberwolfach, Germany, volume 1141 of Lecture Notes in Mathematics, pages 89-120. Springer (Berlin), 1985. (PDF, 1693 KB).

[23]
W. Maass. Major subsets and automorphisms of recursively enumerable sets. Proceedings of Symposia in Pure Mathematics, 42:21-32, 1985. (PDF, 688 KB).

[22]
D. Hochbaum and W. Maass. Approximation schemes for covering and packing problems in image processing and VLSI. J. Assoc. Comp. Mach., 32:130-136, 1985. (PDF, 571 KB).

[21]
W. Maass. Variations on promptly simple sets. J. Symbolic Logic, 50:138-148, 1985. (PDF, 767 KB).

[20]
W. Maass. Quadratic lower bounds for deterministic and nondeterministic one-tape Turing machines. In Proceedings of the 16th Annual ACM Symp. on Theory of Computing, pages 401-408, 1984. (PDF, 453 KB).

[19]
D. Hochbaum and W. Maass. Approximation schemes for covering and packing problems in robotics and VLSI (extended abstract). In Proceedings of Symp. on Theoretical Aspects of Computer Science (Paris 1984), volume 166 of Lecture Notes in Computer Science, pages 55-62. Springer (Berlin), 1984. (PDF, 452 KB).

[18]
W. Maass. On the orbits of hyperhypersimple sets. J. Symbolic Logic, 49:51-62, 1984. (PDF, 830 KB).

[17]
S. Homer and W. Maass. Oracle dependent properties of the lattice of NP-sets. Theoretical Computer Science, 24:279-289, 1983. (PDF, 596 KB).

[16]
W. Maass and M. Stob. The intervals of the lattice of recursively enumerable sets determined by major subsets. Ann. of Pure and Applied Logic, 24:189-212, 1983. (PDF, 1319 KB).

[15]
W. Maass. Characterization of recursively enumerable sets with supersets effectively isomorphic to all recursively enumerable sets. Trans. Amer. Math. Soc., 279:311-336, 1983. (PDF, 1564 KB).

[14]
W. Maass. Recursively enumerable generic sets. The Journal of Symbolic Logic, 47:809-823, 1983. (PDF, 814 KB).

[13]
W. Maass. Recursively invariant beta-recursion theory. Ann. of Math. Logic, 21:27-73, 1981. (PDF, 2907 KB).

[12]
W. Maass. A countable basis for sigma-one-two sets and recursion theory on aleph-one. Proceedings Amer. Math. Soc., 82:267-270, 1981. (PDF, 222 KB).

[11]
W. Maass, A. Shore, and M. Stob. Splitting properties and jump classes. Israel J. Math., 39:210-224, 1981. (PDF, 692 KB).

[10]
W. Maass. Recursively invariant beta-recursion theory -- a preliminary survey. In Proceedings of the Conf. on Recursion Theory and Computational Complexity, G. Lolli, editor, pages 229-236. Liguori editore (Napoli), 1981. (PDF, 360 KB).

[9]
W. Maass. On alpha- and beta-recursively enumerable degrees. Ann. of Math. Logic, 16:205-231, 1979. (PDF, 1607 KB).

[8]
W. Maass. High alpha-recursively enumerable degrees. In Generalized Recursion Theory II, E. Fenstad, R. O. Gandy, and G. E. Sacks, editors, pages 239-269. North-Holland (Amsterdam), 1978. (PDF, 1204 KB).

[7]
W. Maass. Contributions to alpha- and beta-recursion theory. Habilitationsschrift, Ludwig-Maximilians-Universitaet Muenchen, 1978. Minerva Publikation (Muenchen). (PDF, 3951 KB).

[6]
W. Maass. Fine structure theory of the constructible universe in alpha- and beta-recursion theory. In Higher Set Theory, G. H. Mueller and D. Scott, editors, volume 669 of Lecture Notes in Mathematics, pages 339-359. Springer (Berlin), 1978. (PDF, 1001 KB).

[5]
W. Maass. The uniform regular set theorem in alpha-recursion theory. J. Symbolic Logic, 43:270-279, 1978. (PDF, 621 KB).

[4]
W. Maass. Inadmissibility, tame r.e. sets and the admissible collapse. Annals of Mathematical Logic, 13:149-170, 1978. (PDF, 1216 KB).

[3]
W. Maass. On minimal pairs and minimal degrees in higher recursion theory. Archiv Math. Logik Grundlagen, 18:169-186, 1977. (PDF, 1048 KB).

[2]
W. Maass. Eine Funktionalinterpretation der praedikativen Analysis (A functional interpretation of predicative analysis). Archiv Math. Logik Grundlagen, 18:27-46, 1976. (PDF, 907 KB).

[1]
W. Maass. Church-Rosser-Theorem fuer Lambda-Kalkuele mit unendlich langen Termen (Church-Rosser theorem for lambda calculi with infinitely long terms). In Proof Theory Symposium Kiel 1974, J. Diller and G. H. Mueller, editors, volume 500 of Lecture Notes in Mathematics, pages 257-263. Springer (Berlin), 1975. (PDF, 296 KB).