This research aims to provide an effective thesis management system for an academic department, to be piloted in the Computer Technology (CT) department. The objective of the Thesis Coordinator System is to provide a prototype for executing and coordinating the thesis activities of the CT department, from enrolling students in their specific thesis courses to assigning them a hassle-free defense schedule that benefits both advisers and panels.
The prototype includes a search engine and an archiving process that enable students to find suitable titles for their thesis topics. This project focuses in particular on energy production via renewables, such as photovoltaic panels. Prerequisites: familiarity with stochastic processes and formal verification; no specific knowledge of smart grids is needed.
The abstraction procedure runs in MATLAB and leverages sparse representations, fast manipulations based on vector calculus, and optimized data structures such as Difference-Bound Matrices (DBMs). We are interested in working with existing commercial simulation software targeted at the modelling and analysis of physical, multi-domain systems. This further encompasses integration with related software tools, as well as interfacing with devices and code generation. We are interested in enriching the software with formal verification features, envisioning extensions of the tool that might enable the user to state formal assertions or guarantees on model properties, or to synthesise correct-by-design architectures.
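As a hedged illustration of the DBM data structure mentioned above (in Python rather than the MATLAB of the actual tool, and with made-up constraint values), a DBM records upper bounds on differences of variables and is tightened to canonical form with Floyd-Warshall:

```python
import math

def dbm_canonical(D):
    """Tighten a Difference-Bound Matrix with Floyd-Warshall.
    D[i][j] is an upper bound on x_i - x_j (math.inf means
    unconstrained); index 0 is a reference variable fixed at 0,
    so D[i][0] is an upper bound on x_i itself."""
    n = len(D)
    C = [row[:] for row in D]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if C[i][k] + C[k][j] < C[i][j]:
                    C[i][j] = C[i][k] + C[k][j]
    return C

def dbm_is_empty(D):
    """After canonicalisation, the constraint set is empty iff
    Floyd-Warshall found a negative cycle, i.e. a negative
    diagonal entry."""
    return any(D[i][i] < 0 for i in range(len(D)))

# Toy zone over (0, x, y) with constraints x >= 0, x <= 3, y - x <= 2:
inf = math.inf
D = [[0,   0,   inf],
     [3,   0,   inf],
     [inf, 2,   0  ]]
C = dbm_canonical(D)   # derives the implied bound y <= 5
```

Canonicalisation makes all implied constraints explicit, which is what enables the fast inclusion and emptiness checks such abstraction procedures rely on.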
The student will be engaged in developing algorithmic solutions towards this goal, while reframing them within a formal and general approach.
The project is interdisciplinary: it deals with hybrid models involving digital and physical quantities, and it connects formal verification techniques from computer science with more classical analytical tools from control engineering. Prerequisites: knowledge of basic formal verification. This project will explore connections between techniques from machine learning and successful approaches from formal verification.
A more practical variant of the project will apply the above theoretical connections to a simple model setup in the area of robotics and autonomy. This project will investigate a rich research line, recently pursued by a few groups within the Department of Computer Science, looking at the development of quantitative abstractions of Markovian models.
Abstractions come in the form of lumped, aggregated models, which are easier to simulate and to analyse. Key to the novelty of this work, the proposed abstractions are quantitative, in that precise error bounds with respect to the original model can be established. As such, whatever can be shown over the abstract model can also be formally asserted over the original one. This project, grounded in existing literature, will pursue, depending on the student's interests, either extensions of this recent work or its implementation as a software tool.
A reward function is then assigned to the states of the product automaton according to the automaton's accepting conditions. Additionally, we show that the RL procedure sets up an online value iteration method that calculates the maximum probability of satisfying the given property at any state of the MDP. We evaluate the performance of the algorithm on a number of numerical examples.
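The value iteration step described above can be sketched on a toy finite MDP (states, actions, and probabilities below are illustrative, not taken from the cited work); for the sketch, satisfaction of the property is simplified to reaching an accepting set of the product:

```python
def max_reach_probability(P, accepting, n_iter=200):
    """Value iteration for the maximal probability of reaching an
    accepting set in a finite MDP.  P[s] maps each action to a
    dict successor -> probability; accepting states are treated
    as absorbing with value 1."""
    states = range(len(P))
    V = [1.0 if s in accepting else 0.0 for s in states]
    for _ in range(n_iter):
        V = [1.0 if s in accepting else
             max(sum(p * V[t] for t, p in P[s][a].items())
                 for a in P[s])
             for s in states]
    return V

# Toy MDP: state 1 is accepting, state 2 is a sink; action 'b'
# from state 0 is the better choice.
P = [
    {'a': {1: 0.5, 2: 0.5}, 'b': {1: 0.9, 2: 0.1}},
    {'a': {1: 1.0}},
    {'a': {2: 1.0}},
]
V = max_reach_probability(P, {1})
```

The maximising action at each state also yields the synthesised strategy, which is what the RL procedure approximates online when the transition probabilities are unknown.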
This project will provide extensions of these novel, recent results. Stochastic hybrid systems (SHS) are dynamical models for the interaction of continuous and discrete states. The probabilistic evolution of the continuous and discrete parts of the system is coupled, which makes the analysis and verification of such systems challenging. Among specifications of SHS, probabilistic invariance and reach-avoid have received considerable attention recently, and numerical methods have been developed to compute these two specifications. These methods are mainly based on state-space partitioning and abstraction of the SHS by Markov chains that are optimal in the sense of reducing the abstraction error with a minimum number of Markov states.
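The partitioning-based abstraction idea can be illustrated with a minimal sketch (not the error-optimal partitioning of the cited methods): a scalar linear system with Gaussian noise is gridded into cells, each represented by its centre, and transition probabilities are the noise mass landing in each target cell. Dynamics and parameters are invented for the example:

```python
import math

def gauss_cdf(x, mu, sigma):
    """CDF of a Gaussian with mean mu and std sigma."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def abstract_to_markov_chain(a, sigma, lo, hi, n_bins):
    """Uniform-grid abstraction of x' = a*x + N(0, sigma^2) over
    [lo, hi] into an n_bins-state Markov chain.  Mass falling
    outside the domain is simply lost (a sub-stochastic chain)."""
    w = (hi - lo) / n_bins
    centres = [lo + (i + 0.5) * w for i in range(n_bins)]
    P = []
    for c in centres:
        mu = a * c                      # mean of the successor state
        row = [gauss_cdf(lo + (j + 1) * w, mu, sigma)
               - gauss_cdf(lo + j * w, mu, sigma)
               for j in range(n_bins)]
        P.append(row)
    return centres, P

centres, P = abstract_to_markov_chain(0.5, 0.1, -1.0, 1.0, 10)
```

Refining the grid shrinks the abstraction error at the cost of more Markov states, which is exactly the trade-off the optimal-partitioning methods address.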
The goal of the project is to combine the codes that have been developed for these methods. The student should also design a user-friendly interface for the choice of dynamical equations, parameters, methods, etc. Contextuality is a fundamental feature of quantum physical theories, and one that distinguishes them from classical mechanics. In a recent paper by Abramsky and Brandenburger, the categorical notion of sheaves was used to formalize contextuality. This has resulted in generalizing and extending contextuality to other theories that share some structural properties with quantum mechanics.
A consequence of this type of modeling is a succinct logical axiomatization of properties such as non-local correlations and, as a result, of classical no-go theorems such as those of Bell and Kochen–Specker. Like quantum mechanics, natural language has contextual features; these have been the subject of much study in distributional models of meaning, which originated in the work of Firth and were later advanced by Schütze. These models are based on vector spaces over the semiring of positive reals, with an inner product operation.
The vectors represent meanings of words, based on the contexts in which they often appear, and the inner product measures degrees of word synonymy. Recent work in our group has developed a compositional distributional model of meaning in natural language, which lifts vector space meaning to phrases and sentences.
This has already led to some very promising experimental results. However, this approach does not deal well with logical words. The goal of this project is to use sheaf-theoretic models to provide both a contextual and a logical semantics for natural language. We believe that sheaves provide a generalization of the logical Montague semantics of natural language, which did very well in modeling logical connectives but did not account for contextuality.
The project will also aim to combine these ideas with those of the distributional approach, leading to an approach which combines the advantages of Montague-style and vector-space semantics. The interested student should have taken the category theory and computational linguistics courses, or be familiar with the contents of these.
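The vector-space side of this proposal can be illustrated with a toy co-occurrence model (the words, contexts, and counts below are invented): word meanings are vectors of context counts, and the normalised inner product (cosine) measures synonymy.

```python
import math

def cosine(u, v):
    """Cosine similarity of two co-occurrence vectors: 1.0 for
    identical directions, 0.0 for orthogonal ones."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy co-occurrence counts over contexts ("purr", "bark", "bone"):
cat   = [8, 1, 0]
dog   = [1, 7, 5]
puppy = [0, 6, 4]
```

On these counts, `cosine(dog, puppy)` exceeds `cosine(cat, puppy)`, reflecting that "dog" and "puppy" appear in more similar contexts; the Montague-style logical layer the project envisions would sit on top of exactly this kind of representation.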
Let F1 and F2 be sentences in first-order logic such that F1 entails F2: that is, any model of F1 is also a model of F2. An interpolant is then a sentence I, built only from symbols common to F1 and F2, such that F1 entails I and I entails F2; for example, p ∧ q entails p ∨ r, and p is an interpolant. The goal of this project is to explore and implement procedures for constructing interpolants, particularly for certain decidable fragments of first-order logic. Finding such interpolants has applications in some database query rewriting problems. This project will look at how to find the best plan for a query, given a collection of data sources with access restrictions.
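For the propositional case, the strongest interpolant can be computed by brute force over the shared variables. The following sketch assumes formulas are given as Python predicates over variable assignments (a simplification; it is not one of the decidable first-order fragments the project targets):

```python
from itertools import product

def interpolant(f1, f2, vars1, vars2):
    """Strongest propositional interpolant of f1 over the shared
    vocabulary: I holds on a shared assignment iff some extension
    of it to the f1-only variables satisfies f1.  Requires that
    f1 entails f2, so that f1 |= I |= f2."""
    shared = sorted(set(vars1) & set(vars2))
    only1 = sorted(set(vars1) - set(shared))
    table = {}
    for svals in product([False, True], repeat=len(shared)):
        env = dict(zip(shared, svals))
        table[svals] = any(
            f1({**env, **dict(zip(only1, ovals))})
            for ovals in product([False, True], repeat=len(only1)))
    return lambda env: table[tuple(env[v] for v in shared)]

# The example from the text: p & q entails p | r, interpolant p.
f1 = lambda e: e['p'] and e['q']
f2 = lambda e: e['p'] or e['r']
I = interpolant(f1, f2, ['p', 'q'], ['p', 'r'])
```

Enumerating assignments is exponential, of course; the point of the project is to construct interpolants syntactically, from proofs or via decision procedures, rather than semantically as here.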
We will look at logic-based methods for analyzing query plans, taking into account integrity constraints that may exist on the data. Boolean gates such as AND can be described by chemical reactions over molecular species; these reactions can in turn be implemented by DNA molecules and physically executed. Networks of such Boolean gates can function as controllers for molecular-scale devices, including devices we may want to insert into living organisms.
We want to investigate, by mathematical analysis, model checking, and simulation, the noise behavior of these logical gates, due both to noisy inputs and to the intrinsic molecular fluctuations generated by chemical reactions. How can we compute reliably in such a regime, and how can we design logic gates that are resistant to noise?
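The intrinsic molecular fluctuations mentioned above can be explored by stochastic simulation. This hedged sketch runs Gillespie's algorithm on a minimal A + B → C "AND" gate (output C is produced only while both inputs are present); the species and rate constant are invented for illustration:

```python
import random

def gillespie_and_gate(nA, nB, k=1.0, t_end=50.0, seed=0):
    """Gillespie stochastic simulation of the single reaction
    A + B -> C.  Returns the final copy number of the output C.
    With either input absent the gate never fires, mimicking AND."""
    rng = random.Random(seed)
    t, nC = 0.0, 0
    while t < t_end:
        a = k * nA * nB              # reaction propensity
        if a == 0:
            break                    # one input exhausted: gate off
        t += rng.expovariate(a)      # exponential time to next firing
        if t >= t_end:
            break
        nA, nB, nC = nA - 1, nB - 1, nC + 1
    return nC
```

Running many seeds for borderline input counts exposes exactly the kind of output noise the project wants to quantify by model checking.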
The categorical compositional distributional model of meaning is an emerging field of computational linguistics and natural language processing. It has been pioneered in the Quantum and Computational Linguistics groups of the department. Its general theoretical underpinnings are based on compact closed categories, inspired by the categorical quantum models of Abramsky and Coecke.
In order to apply the model to mainstream tasks, one has to instantiate it on concrete linguistic models, in particular on distributional vector space models of meaning. So far, these instantiations have been carried out separately, on datasets of short sentences containing a few language units such as verbs, adjectives, and relative pronouns.
In this project, we aim to connect these individual experiments and perform a unified task: a term-classification task, the goal of which is to successfully classify a number of dictionary terms to their definitions, or vice versa.
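One concrete instantiation of the kind mentioned above, nouns as vectors and adjectives as linear maps, can be sketched as follows (the basis, vectors, and matrix entries are made up for illustration):

```python
def mat_vec(M, v):
    """Apply a linear map (an adjective matrix) to a noun vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Toy 2-dimensional noun space with basis ("fruit", "device"):
apple = [0.9, 0.4]

# "electronic" as an adjective matrix that suppresses the fruit
# sense and amplifies the device sense (invented numbers):
electronic = [[0.1, 0.0],
              [0.0, 1.5]]

electronic_apple = mat_vec(electronic, apple)   # ≈ [0.09, 0.6]
```

A unified term-classification experiment would compose such maps across whole definitions and compare the resulting sentence vectors against term vectors, e.g. by cosine similarity.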
We seek candidates with a background in computational linguistics and linear algebra, as well as programming skills.
Knowledge of category theory is encouraged, but not required. The project will be co-supervised by Dimitri Kartsaklis (who has carried out the previous term-classification tasks), Mehrnoosh Sadrzadeh, and Bob Coecke. The outcome may be considered for publication in a peer-reviewed conference or journal. Is there a way to define a category whose objects are root systems with additional structure, so that the construction of Lie algebras from them is functorial?
If so, what are its properties (e.g. …)? In the compositional distributional model of meaning (Coecke et al.), word meanings are in practice often modelled using finite-dimensional vector spaces, built via word co-occurrence in text corpora. The conceptual-spaces representation of meaning has some similarities with, but also key differences from, this compositional distributional model.
The project would examine a few key conceptual spaces, such as 'colour', 'taste', or 'shape', and investigate how these can be incorporated within the compositional distributional model of meaning. This might be undertaken by giving a category-theoretic formalisation of these spaces and identifying a suitable grammatical structure to implement composition, or, alternatively, by analysing the behaviour of nouns, adjectives, and verbs within conceptual spaces and fitting these to the pregroup grammar. References: Coecke, B., M. Sadrzadeh, and S.