Wikipedia only lists two problems under "unsolved problems in computer science":
What are other major problems that should be added to this list?
Rules:
- Only one problem per answer
- Provide a brief description and any relevant links
Can multiplication of $n$ by $n$ matrices be done in $O(n^2)$ operations?
The exponent of the best known upper bound even has a special symbol, $\omega$. Currently $\omega$ is approximately 2.376, by the Coppersmith-Winograd algorithm. A nice overview of the state of the art is Sara Robinson, Toward an Optimal Algorithm for Matrix Multiplication, SIAM News, 38(9), 2005.
Update: Andrew Stothers (in his 2010 thesis) showed that $\omega < 2.3737$, which was improved by Virginia Vassilevska Williams (in a 2011 preprint, published at STOC 2012) to $\omega < 2.372873$. These bounds were both obtained by a careful analysis of the basic Coppersmith-Winograd technique.
Update (Jan 30, 2014): François Le Gall has proved that $\omega < 2.3728639$ in a paper published in ISSAC 2014 (arXiv preprint).
Update (Nov 4, 2023): "New Bounds for Matrix Multiplication: from Alpha to Omega" proved that $\omega \leq 2.371552$.
Is Graph Isomorphism in P?
The complexity of Graph Isomorphism (GI) has been an open question for several decades. Stephen Cook mentioned it in his 1971 paper on NP-completeness of SAT.
Determining whether two graphs are isomorphic can usually be done quickly, for instance by software such as nauty and saucy. On the other hand, Miyazaki constructed classes of instances for which nauty provably requires exponential time.
Read and Corneil reviewed the many attempts to tackle the complexity of GI up to that point: The Graph Isomorphism Disease, Journal of Graph Theory 1, 339–363, 1977.
GI is not known to be in co-NP, but there is a simple randomized protocol for Graph Non-Isomorphism (GNI), so GI (= co-GNI) is believed to be "close to" NP ${}\cap{}$ co-NP.
On the other hand, if GI is NP-complete, then the Polynomial Hierarchy collapses. So GI is unlikely to be NP-complete. (Boppana, Håstad, Zachos, Does co-NP Have Short Interactive Proofs?, IPL 25, 127–132, 1987)
Shiva Kintali has a nice discussion of the complexity of GI at his blog.
Laszlo Babai proved that Graph Isomorphism is in quasipolynomial time.
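To illustrate the gap between theory and practice here, the trivial $n!$ isomorphism test is easy to write in Python; tools like nauty succeed on most instances by pruning this search with canonical labelings:

```python
from itertools import permutations

def are_isomorphic(edges1, edges2, n):
    """Decide isomorphism of two simple undirected graphs on vertices
    0..n-1 by trying all n! vertex bijections -- fine for tiny n,
    hopeless in general."""
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    if len(e1) != len(e2):
        return False
    for perm in permutations(range(n)):
        if {frozenset((perm[u], perm[v])) for u, v in e1} == e2:
            return True
    return False
```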
Is Factoring in $\mathsf{P}$?
Is there a pivoting rule for the simplex algorithm that yields worst-case polynomial running time? More generally, is there any strongly polynomial algorithm for linear programming?
The exponential-time hypothesis (ETH) asserts that solving SAT requires exponential, $2^{\Omega(n)}$, time. ETH implies many things, for instance that SAT is not in P, so ETH implies P ≠ NP. See Impagliazzo, Paturi, Zane, Which Problems Have Strongly Exponential Complexity?, JCSS 63, 512–530, 2001.
ETH is widely believed, but likely to be difficult to prove, as it implies many other complexity class separations.
Immerman and Vardi show that fixed-point logic captures PTIME on the class of ordered structures. One of the biggest open problems in descriptive complexity theory is whether the dependency on the order can be removed:
Is there a logic that captures PTIME?
Put simply, a logic capturing PTIME is a programming language for graph problems that works directly on the graph structure, does not have access to the encoding of the vertices and edges, and nevertheless expresses exactly the polynomial-time-decidable graph properties.
If there is no logic that captures PTIME, then $P \neq NP$, since NP is captured by existential second-order logic. A logic capturing PTIME would thus provide a possible line of attack on P vs NP.
See Lipton's blog for an informal discussion and M. Grohe: The Quest for a Logic Capturing PTIME (LICS 2008) for a more technical survey.
Is the unique games conjecture true?
And: Given that there are sub-exponential time approximation algorithms for Unique Games, where does the problem ultimately rest in terms of the complexity landscape?
Permanent versus Determinant
The permanent versus determinant question is interesting because of two facts. First, the permanent of the biadjacency matrix of a bipartite graph counts its perfect matchings, and computing the permanent is therefore #P-complete. At the same time, the definition of the permanent is very close to that of the determinant, the two ultimately differing only by a simple sign change. Determinant computation is well known to be in P. Studying the difference between the permanent and the determinant, and how many determinant computations are required to compute the permanent, speaks to P versus #P.
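The sign-change point is easy to see in code: here is a Python sketch computing both quantities from the Leibniz formula, identical except for the permutation-sign factor (written this way both take $n!$ time, of course; only the determinant has a polynomial-time algorithm):

```python
from itertools import permutations

def _leibniz(M, signed):
    """Sum over permutations of products of entries -- with the sign of
    the permutation this is the determinant, without it the permanent."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= M[i][p[i]]
        if signed:
            # parity of p via the number of inversions
            inv = sum(pi > pj for k, pi in enumerate(p) for pj in p[k + 1:])
            term *= (-1) ** inv
        total += term
    return total

def determinant(M):
    return _leibniz(M, signed=True)

def permanent(M):
    return _leibniz(M, signed=False)
```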
The dynamic optimality conjecture for splay trees.
Or more generally: Is any online dynamic binary search tree O(1)-competitive?
Can we compute the FFT in much less than $O(n \log n)$ time?
In the same (very) general vein, there are many questions about improving the run-times of classical problems and algorithms: e.g., can all-pairs shortest paths (APSP) be solved in $O(n^{3-\epsilon})$ time?
Edit: APSP runs in time $\frac{n^3}{2^{\Omega(\log n)^{1/2}}}$ "where additions and comparisons of reals are unit cost (but all other operations have typical logarithmic cost)": http://arxiv.org/pdf/1312.6680v2.pdf
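For reference, the $O(n^3)$ baseline the question asks to beat is just Floyd-Warshall; a minimal Python version:

```python
def floyd_warshall(dist):
    """Classic O(n^3) all-pairs shortest paths. `dist` is an n x n matrix
    of edge weights with float('inf') for missing edges and 0 on the
    diagonal; it is updated in place to the shortest-path distances."""
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```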
A linear time deterministic algorithm for the minimum spanning tree problem.
NP versus co-NP
The NP versus co-NP question is interesting because NP ≠ co-NP implies P ≠ NP (as P is closed under complement). It also relates to "duality": the separation between finding/verifying examples and finding/verifying counterexamples. In fact, showing that a problem is in both NP and co-NP is often our first good evidence that a problem which seems to be outside P is nevertheless unlikely to be NP-complete.
Do all propositional tautologies have polynomial-size Frege proofs?
Arguably the major open problem of proof complexity: demonstrate super-polynomial size lower bounds on propositional proofs (also called Frege proofs).
Informally, a Frege proof system is just a standard propositional proof system for proving propositional tautologies (one learns in a basic logic course), having axioms and deduction rules, where proof-lines are written as formulas. The size of a Frege proof is the number of symbols it takes to write down the proof.
The problem then asks whether there is a family $(F_n)_{n=1}^\infty$ of propositional tautological formulas for which there is no polynomial $ p $ such that the minimal Frege proof size of $ F_n $ is at most $ p(|F_n|)$, for all $ n=1,2,\ldots$ (where $ |F_n| $ denotes the size of the formula $ F_n $).
Formal definition of a Frege proof system
Definition (Frege rule) A Frege rule is a sequence of propositional formulas $ A_0(\overline x),\ldots,A_k(\overline x) $, for $ k \ge 0 $, written as $ \frac{A_1(\overline x), \ldots,A_k(\overline x)}{A_0(\overline x)}$. In case $ k = 0 $, the Frege rule is called an axiom scheme. A formula $ F_0 $ is said to be derived by the rule from $ F_1,\ldots,F_k $ if $ F_0,\ldots,F_k $ are all substitution instances of $ A_0,\ldots,A_k $, for some assignment to the $ \overline x $ variables (that is, there are formulas $B_1,\ldots,B_n $ such that $F_i = A_i(B_1/x_1,\ldots,B_n/x_n) $, for all $ i=0,\ldots,k $). The Frege rule is said to be sound if whenever an assignment satisfies the formulas in the upper side $A_1,\ldots,A_k $, then it also satisfies the formula in the lower side $ A_0 $.
Definition (Frege proof) Given a set of Frege rules, a Frege proof is a sequence of formulas such that every proof-line is either an axiom or was derived by one of the given Frege rules from previous proof-lines. If the sequence terminates with the formula $ A $, then the proof is said to be a proof of $ A $. The size of a Frege proof is the total size of all the formulas in the proof.
A proof system is said to be implicationally complete if for every set of formulas $ T $, if $ T $ semantically implies $ F $, then there is a proof of $ F $ using (possibly) axioms from $ T $. A proof system is said to be sound if it admits proofs of only tautologies (when not using auxiliary axioms, like in the $ T $ above).
Definition (Frege proof system) Given a propositional language and a finite set $ P $ of sound Frege rules, we say that $ P $ is a Frege proof system if $ P $ is implicationally complete.
Note that a Frege proof is always sound since the Frege rules are assumed to be sound. We do not need to work with a specific Frege proof system, since a basic result in proof complexity states that every two Frege proof systems, even over different languages, are polynomially equivalent [Reckhow, PhD thesis, University of Toronto, 1976].
Establishing lower bounds on Frege proofs could be viewed as a step towards proving $NP \neq coNP$, since if this is true then no propositional proof system (including Frege) can have polynomial size proofs for all tautologies.
Are there problems that cannot be solved efficiently by parallel computers?
Problems that are P-complete are not known to be parallelizable. P-complete problems include Horn-SAT and Linear Programming. But proving that this is the case would require separating some notion of parallelizable problems (such as NC or LOGCFL) from P.
Computer processor designs are increasing the number of processing units, in the hope that this will yield improved performance. If fundamental algorithms such as Linear Programming are inherently not parallelizable, then there are significant consequences.
Are there truly subquadratic-time algorithms (meaning $O(n^{2-\delta})$ time for some constant $\delta>0$) for 3SUM-hard Problems?
In 2014, Grønlund and Pettie described a deterministic algorithm for 3SUM itself that runs in time $O(n^2/(\log n/\log \log n)^{2/3})$. Although this is a major result, the improvement over $O(n^2)$ is only (sub)logarithmic. Moreover, no similar subquadratic algorithms are known for most other 3SUM-hard problems.
Andrej Dubrovsky, Oksana Scegulnaja-Dubrovska Improved Quantum Lower Bounds for 3-Sum Problem. Proceedings of Baltic DB&IS 2004, vol. 2, Riga, Latvia, pp.40-45.
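For concreteness, the classic $O(n^2)$ algorithm that the open question asks to beat is a short sort-and-scan; a Python sketch:

```python
def three_sum(a):
    """Decide 3SUM (do three elements sum to zero?) in O(n^2) time
    with the classic sort + two-pointer scan."""
    a = sorted(a)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1     # total too small: advance the low pointer
            else:
                hi -= 1     # total too large: retreat the high pointer
    return False
```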
Can we compute the edit distance between two strings of length $n$ in sub-quadratic time, i.e., in time $O(n^{2-\epsilon})$ for some $\epsilon>0$?
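The quadratic baseline here is the textbook Wagner-Fischer dynamic program; a Python sketch using linear space:

```python
def edit_distance(s, t):
    """Wagner-Fischer O(|s|*|t|) dynamic program for Levenshtein
    distance, keeping only the previous row of the DP table."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))   # distances from the empty prefix of s
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                           # delete s[i-1]
                         cur[j - 1] + 1,                        # insert t[j-1]
                         prev[j - 1] + (s[i - 1] != t[j - 1]))  # substitute
        prev = cur
    return prev[n]
```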
BQP = P?
Also: NP contained in BQP?
I know this violates the rules by having two questions in the answer, but when taken with the P vs NP question, they are not necessarily independent questions.
and, a little further away from the mainstream:
Does NP have measure zero within EXP?
(Informally, if you have all problems in EXP on a table, and you pick one up uniformly at random, what is the probability that the problem you chose is also in NP? This question has been formalized by the notion of resource-bounded measure. It is known that P has measure zero within EXP, i.e., the problem you picked up from the table is almost surely not in P.)
What is the approximability of Metric TSP? Christofides' Algorithm from 1975 is a polynomial-time (3/2)-approximation algorithm. Is it NP-hard to do better?
Approximating Metric TSP to within a factor smaller than 220/219 is NP-hard (Papadimitriou and Vempala, 2006 [PS]). To my knowledge this is the best known lower bound.
There is some evidence suggesting that the actual bound may be 4/3 (Carr and Vempala, 2004 [Free version] [Good version]).
The upper bound on approximability was recently lowered to $13/9$ (Mucha 2011 "13/9 -approximation for Graphic TSP" [PDF])
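For orientation, the simpler MST-shortcutting (double-tree) algorithm already gives a 2-approximation; Christofides improves this to 3/2 by adding a minimum-weight matching step, which is omitted in this Python sketch (`dist` is assumed to be a symmetric metric):

```python
def tsp_double_tree(dist):
    """MST-shortcutting 2-approximation for metric TSP: build a minimum
    spanning tree with Prim's algorithm, then shortcut a preorder walk."""
    n = len(dist)
    in_tree = [False] * n
    parent = [0] * n
    best = [float('inf')] * n
    best[0] = 0
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v], parent[v] = dist[u][v], u
    # preorder walk of the tree, visiting each vertex once (shortcutting)
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return tour, cost
```

By the triangle inequality the shortcuts never increase the cost, so the tour costs at most twice the MST, hence at most twice the optimum.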
Shannon proved in 1949 that if you pick a Boolean function at random, it has exponential circuit complexity with probability almost one.
The best lower bound for an explicit Boolean function $f:\{0,1\}^n \to \{0,1\}$ we have so far is $5n - o(n)$ by K. Iwama, O. Lachish, H. Morizumi, and R. Raz.
What is the query complexity of testing triangle-freeness in dense graphs (i.e., distinguishing triangle-free graphs from those $\epsilon$-far from being triangle-free)? The known upper bound is a tower of exponentials in $1/\epsilon$, while the known lower bound is only mildly superpolynomial in $1/\epsilon$. This is a pretty basic question in extremal graph theory/additive combinatorics that has been open for nearly 30 years.
Separate NEXP from BPP. People tend to believe BPP=P, but no one can separate NEXP from BPP.
Derandomization of the Polynomial Identity Testing problem
The problem is the following: Given an arithmetic circuit computing a polynomial $P$, is $P$ identically zero?
This problem can be solved in randomized polynomial time but is not known to be solvable in deterministic polynomial time.
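The randomized algorithm is a direct application of the Schwartz-Zippel lemma; a Python sketch, where the circuit is represented abstractly as any callable that evaluates the polynomial modulo a prime:

```python
import random

def is_zero_poly(circuit, num_vars, trials=20, p=2**61 - 1):
    """Schwartz-Zippel identity test: a nonzero polynomial of degree d
    vanishes at a uniformly random point of (Z_p)^n with probability at
    most d/p, so repeated random evaluations give a one-sided-error test.
    `circuit(point, p)` must return the polynomial's value mod p."""
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(num_vars)]
        if circuit(point, p) != 0:
            return False   # a nonzero value certifies the polynomial is nonzero
    return True            # identically zero, with high probability
```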
Related is Shub and Smale's $\tau$ conjecture. Given a polynomial $P$, we define its $\tau$-complexity $\tau(P)$ as the size of the smallest arithmetic circuit computing $P$ using the sole constant $1$. For a univariate polynomial $P\in\mathbb Z[x]$, let $z(P)$ be its number of real roots.
Prove that there exists a universal constant $c$ such that for every $P\in\mathbb Z[x]$, $z(P)\le (1+\tau(P))^c$.
The area of parameterized complexity has its own load of open problems.
Consider decision problems that come with a parameter $k$ in addition to the input; for example: does the input graph have a vertex cover of size $k$?
Many, MANY, combinatorial problems exist in this form. Parameterized complexity considers an algorithm to be "efficient" if its running time is upper bounded by $f(k)n^c$ where $f$ is an arbitrary function and $c$ is a constant independent of $k$. In comparison, notice that all such problems can easily be solved in $n^{O(k)}$ time.
This framework models the cases in which we are looking for a small combinatorial structure and we can afford exponential run-time with respect to the size of the solution/witness.
A problem with such an algorithm (e.g. vertex cover) is called Fixed Parameter Tractable (FPT).
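As a concrete example, here is the textbook $O(2^k m)$ bounded-search-tree algorithm showing that vertex cover is FPT, sketched in Python:

```python
def vertex_cover(edges, k):
    """Decide whether the graph has a vertex cover of size <= k in
    O(2^k * m) time: pick any uncovered edge and branch on which of its
    two endpoints joins the cover."""
    if not edges:
        return True          # nothing left to cover
    if k == 0:
        return False         # edges remain but the budget is exhausted
    u, v = edges[0]
    return (vertex_cover([e for e in edges if u not in e], k - 1) or
            vertex_cover([e for e in edges if v not in e], k - 1))
```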
Parameterized complexity is a mature theory and has both strong theoretical foundations and appeal for practical applications. Decision problems interesting for such theory form a very well structured hierarchy of classes with natural complete problems:
$$ FPT \subseteq W[1] \subseteq W[2] \subseteq \ldots \subseteq W[i] \subseteq W[i+1] \subseteq \ldots \subseteq W[P] $$
Of course it is open whether any of these inclusions is strict. Notice that if $FPT=W[1]$ then SAT has a subexponential algorithm (this is non-trivial). This last statement connects parameterized complexity with the ETH mentioned above.
Also notice that investigating such collapses is not an empty exercise: proving $W[1]=FPT$ is equivalent to proving that there is a fixed-parameter tractable algorithm for finding $k$-cliques.
I know the OP asked for only one problem per post, but the RTA (Rewriting Techniques and their Applications) and TLCA (Typed Lambda Calculi and their Applications) conferences both maintain lists of open problems in their fields. These lists are quite useful, as they also include pointers to previous work done on attempting to solve these problems.
Is the discrete logarithm problem in P?
Let $G$ be a cyclic group of order $q$ and $g,h \in G$ such that $g$ is a generator of $G$. The problem of finding $n \in \mathbb{N}$ such that $g^n = h$ is known as the discrete logarithm problem (DLP). Is there a (classical) algorithm for solving the DLP in worst-case polynomial-time in the number of bits of $q$?
There are variations of DLP which are believed to be easier, but are still unsolved. The computational Diffie-Hellman problem (CDH) asks for finding $g^{a b}$ given $g, g^a$ and $g^b$. The decisional Diffie-Hellman problem (DDH) asks for deciding, given $g, g^a, g^b, h \in G$, if $g^{a b} = h$.
Clearly DLP is hard if CDH is hard, and CDH is hard if DDH is hard, but no converse reductions are known, except for some groups. The assumption that DDH is hard is key to the security of some cryptosystems, such as ElGamal and Cramer-Shoup.
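For concreteness, the generic baby-step giant-step algorithm solves DLP in $O(\sqrt{q})$ time and space, which is still exponential in the bit length of $q$; a Python sketch for the multiplicative group mod a prime $p$ (uses Python 3.8+ modular inverses via `pow`):

```python
def discrete_log(g, h, p, q):
    """Baby-step giant-step: find n in [0, q) with g^n = h (mod p),
    where q is the order of g, in O(sqrt(q)) time and space."""
    m = int(q ** 0.5) + 1
    baby, cur = {}, 1
    for j in range(m):                 # baby steps: g^0, g^1, ..., g^{m-1}
        baby.setdefault(cur, j)
        cur = cur * g % p
    step = pow(g, -m, p)               # g^{-m} mod p (Python 3.8+)
    gamma = h % p
    for i in range(m):                 # giant steps: h * g^{-im}
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * step % p
    return None                        # h is not a power of g
```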
Is there a Quantum PCP theorem?
There are a lot of open problems in lambda calculi (typed and untyped). See the TLCA list of open problems for details; there is also a nice PDF version without the frames.
I particularly like problem #5:
Are there terms untypable in $F_\omega$ but typable with help of positive recursive types?
Parity games are two-player infinite-duration graph games, whose natural decision problem is in NP and co-NP, and whose natural search problem is in PPAD and PLS.
http://en.wikipedia.org/wiki/Parity_game
Can parity games be solved in polynomial time?
(More generally, a long-standing major open question in mathematical programming is whether P-matrix Linear Complementarity Problems can be solved in polynomial time.)
Sensitivity versus block sensitivity
Boolean sensitivity is interesting because block sensitivity, a close relative, is polynomially related to several other important and interesting complexity measures (like the certificate complexity of a Boolean function). If sensitivity is always related to block sensitivity in a polynomial way, we have an extremely simple measure of Boolean functions that is related to so many others. Update: Hao Huang proved the sensitivity conjecture in 2019, showing that sensitivity and block sensitivity are indeed polynomially related.
One might read Rubinstein's "Sensitivity vs. block sensitivity of Boolean functions" or Kenyon and Kutin's "Sensitivity, block sensitivity, and l-block sensitivity of boolean functions."
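Both measures are easy to compute by brute force for small functions, which makes experimenting with candidate separating functions straightforward; a Python sketch:

```python
from itertools import product

def sensitivity(f, n):
    """Max over inputs x of the number of single-bit flips that change f(x)."""
    return max(
        sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for i in range(n))
        for x in product((0, 1), repeat=n))

def block_sensitivity(f, n):
    """Max over inputs x of the largest number of pairwise-disjoint blocks
    of coordinates whose joint flip changes f(x). Brute force over all
    blocks and all disjoint collections, so only sensible for tiny n."""
    def flip(x, mask):
        return tuple(b ^ ((mask >> i) & 1) for i, b in enumerate(x))

    best = 0
    for x in product((0, 1), repeat=n):
        sens = [m for m in range(1, 1 << n) if f(flip(x, m)) != f(x)]

        def rec(i, used):
            # max number of disjoint sensitive blocks among sens[i:]
            if i == len(sens):
                return 0
            r = rec(i + 1, used)
            if not sens[i] & used:
                r = max(r, 1 + rec(i + 1, used | sens[i]))
            return r

        best = max(best, rec(0, 0))
    return best
```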
Are NP-completeness in the sense of Cook and NP-completeness in the sense of Karp different concepts, assuming P $\neq$ NP?
Does there exist any hypothesis class that is NP-Hard to (improperly) PAC learn?
This has some possible implications for complexity, and I think the best progress on this question is here: http://www.cs.princeton.edu/~dxiao/docs/ABX08.pdf
What about proving BPP is contained in NP? (Unconditionally; we already know that BPP=P assuming pretty reasonable complexity assumptions)
The "P vs NP" question extends naturally to the polynomial-time hierarchy (PH): does PH have infinitely many levels, or does it collapse to some finite level?
I think this question is (or should be considered as) the most intriguing question of computer science: if PH has infinitely many levels, then $\mathbf{P} \neq \mathbf{NP}$. In addition, several researchers have shown that if Graph Isomorphism is NP-complete, then PH collapses to the 2nd level. Therefore, if PH has infinitely many levels, then Graph Isomorphism is provably not NP-complete.
Several other results follow from the infiniteness of the levels of PH.
Is there an algorithm to compute the generalized star-height of a given regular language?
See http://en.wikipedia.org/wiki/Generalized_star_height_problem
Generalized regular expressions are defined like regular expressions, but they allow the complement operator. The generalized star height (gsh) of a regular language is the minimum nesting depth of Kleene stars needed to represent the language by a generalized regular expression. Regular languages of gsh 0 (also known as star-free languages) have two nice characterizations: Schützenberger gave an algebraic characterization (their syntactic monoid is aperiodic) and McNaughton showed they correspond to FO[<].
It follows that there are languages of gsh $1$, like $(aa)^*$, but no language of gsh $> 1$ is known! Thus a subproblem would be first to find such a language, or to prove that all regular languages have gsh 1. See also http://www.liafa.univ-paris-diderot.fr/~jep/Problemes/starheight.html
The word "major" is a bit frightening and takes us to the P/NP and related questions. Among the almost-major problems which might be feasible, one that I like is the question of randomized decision trees for graph properties. Is it true that for every non-trivial monotone graph property of graphs on $n$ vertices, the expected number of queries needed to decide whether the graph satisfies the property is $\Omega(n^2)$?
This conjecture is known as the Aanderaa-Karp-Rosenberg conjecture.
Is there a best algorithm for integer multiplication and matrix multiplication (MM), or for that matter any other familiar problem? Manuel Blum has suggested these are good candidates not to have a best algorithm. Among bilinear identities such as Strassen's there is no best one according to Coppersmith and Winograd (1982). If the conjectures of Umans et al are correct, then there is no best algorithm of the type they study. For relevant articles Google "Speedup for Natural Problems".
K-server conjecture and randomized K-server conjecture.
Definition, according to Wikipedia:
An online algorithm must control the movement of a set of k servers, represented as points in a metric space, and handle requests that are also in the form of points in the space. As each request arrives, the algorithm must determine which server to move to the requested point. The goal of the algorithm is to keep the total distance all servers move small, relative to the total distance the servers could have moved by an optimal adversary who knows in advance the entire sequence of requests.
Another open problem for the lambda calculus (from TLCA list of open problems; PDF version ).
Problem #22 on the list:
Is there a continuously complete CPO model of the $\lambda$-calculus whose theory is precisely $\lambda\beta\eta$ or $\lambda\beta$?
Proving the existence of hard-on-average problems in NP using the P≠NP assumption.
Bogdanov and Trevisan, Average-Case Complexity, Foundations and Trends in Theoretical Computer Science Vol. 2, No 1 (2006) 1–106
Unconditional derandomization of Arthur-Merlin games. It is known that under hardness assumptions AM = NP. The question is: can we prove unconditionally that AM is a subset of $\Sigma_2^p$? http://cs.haifa.ac.il/~ronen/online_papers/online_papers.html
Open Problem Garden hosts a number of unsolved problems in theoretical computer science.
Proving that BPP is in NP is harder than separating NEXP from BPP
That BPP is in NP implies that the polynomial identity testing problem is in NP, which in turn separates NEXP from P/poly; since BPP is in P/poly, this would separate NEXP from BPP.
It seems strange to me that almost all the answers are about computational complexity, while the question asks for problems in all computer science.
To counter-balance a little bit:
Decidability of the dot-depth hierarchy: Given a first-order formula on finite words and an integer $k$, is there an equivalent first-order formula with only $k$ quantifier alternations?
Recent progress has been made: decidability was shown for $k=2$ in a 2014 paper by Thomas Place and Marc Zeitoun, but the general problem is still wide open.
Can we multiply two arbitrary $n$-bit numbers in $O(n)$ time? There is a trivial lower bound of $\Omega(n)$, but no better lower bound is known. Currently, the asymptotically fastest algorithm is $O(n \log n)$, a recent breakthrough by Harvey and van der Hoeven (2019). The previous best took time $O(n \log n \cdot 2^{O(\log^* n)})$, due to Martin Fürer, building off of the original algorithm by Schönhage and Strassen which ran in $\Theta(n \log n \log \log n)$ time. Regan and Lipton showed that a super-linear lower bound would follow from the Hartmanis-Stearns Real-Time Computability Conjecture.
A recent result of Afshani, Freksen, Kamma, and Larsen established the following conditional lower bound: assuming the network coding conjecture, any constant-degree Boolean circuit for multiplication must have size $\Omega(n\log n)$.
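For a sense of how sub-quadratic (though far from $O(n \log n)$) multiplication works, here is a Python sketch of Karatsuba's $O(n^{\log_2 3}) \approx O(n^{1.585})$ algorithm, the first step on the road to the algorithms above:

```python
def karatsuba(x, y):
    """Karatsuba multiplication of non-negative integers: three recursive
    half-size products instead of the naive four."""
    if x < 16 or y < 16:
        return x * y                        # base case: direct multiply
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)     # split x = xh * 2^n + xl
    yh, yl = y >> n, y & ((1 << n) - 1)
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b # equals xh*yl + xl*yh
    return (a << (2 * n)) + (c << n) + b
```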
Algebraic dichotomy conjecture (Bulatov, Jeavons and Krokhin): Assuming ETH, every constraint satisfaction problem is either in $P$ or requires $2^{\Omega(n)}$ time. (The related dichotomy between $P$ and NP-complete was proved independently by Bulatov and by Zhuk in 2017.)
Is Quasi-Polynomial Time in PSPACE?
Finding natural SampNP-complete distributional problems.
Informally, SampNP is the class of NP problems restricted to distributions that are samplable in polynomial time ("On the Theory of Average Case Complexity", Ben-David, Chor, Goldreich and Luby, JCSS 1992, doi:10.1016/0022-0000(92)90019-F). This class aims to capture the complexity of solving NP on real life instances. While it is known that this class has complete problems, we do not know of any natural complete problems for this class. Finding such a problem would yield the first natural problem for which we have good theoretical reasons to believe that it is hard on average.
Some open problems in complexity theory lower bounds, together with their relationships, are mapped here.
If P != NP, does the polynomial hierarchy collapse?
(Because if P = NP then it completely collapses, of course)
Getting an O(1) factor approximation algorithm in polytime for the Maximum Independent Set of Rectangles.
This is one of the biggest open problems in Computational Geometry. Recently, Anna Adamaszek and Andreas Wiese [1] have given a QPTAS for this problem, which shows the existence of a PTAS assuming standard complexity theory conjectures. However, a constant-factor approximation achievable in polynomial time is still not known. The best known polytime approximation factor is $O(\log \log n)$ [2]. More recently, Abed et al. [3] have given a constant factor approximation based on a conjecture.
[1] http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6686176
[2] http://ttic.uchicago.edu/~cjulia/papers/rectangles-SODA.pdf
[3] http://drops.dagstuhl.de/opus/volltexte/2015/5291/pdf/1.pdf
Conjunctive query containment over bag semantics
In a GoogleFight between two search queries, can one tell if the first query always wins, without looking at the data?
This 1993 question from database theory [1] asks whether it is possible to decide if an SQL query (more precisely, a conjunctive query) always yields at least as many answers as another conjunctive query, over all possible databases. It would be nice to answer this question to help with query optimization.
One can also formalise the question without referring to databases (see also [2]):
Homomorphism Domination
Input: finite relational structures $S$ and $S'$.
Question: is it true that for any finite relational structure $T$, there are at least as many relational structure homomorphisms from $S$ to $T$ as there are from $S'$ to $T$?
Since the quantification is over an infinite set of structures $T$, this may be undecidable. It is known to be NP-hard [3] as a special case of a more general question over positive semirings (the non-negative integers with addition and multiplication form a semiring); $\Pi_2^P$-hardness was claimed two decades ago but remains unclear. If instead of conjunctive queries, slightly more general queries are allowed, then the problem does become undecidable, via reductions from Hilbert's 10th problem [4,5].
What makes this question interesting is that for most positive semirings of interest the general question is decidable, and actually in $\Pi_2^P$. In fact, for the Boolean semiring case the question becomes: given two conjunctive queries, is it always true that when the first query has an answer then so does the second? Ashok Chandra and Philip Merlin showed in 1977 [6] that this is equivalent to checking whether there exists a homomorphism between the queries, which is in NP. Moreover, in typical databases the queries are usually small or even fixed, while the data is large and changes frequently. This means that even brute force search for a homomorphism between the input queries may be worthwhile.
So it might be a good idea to look quite closely at two small fixed conjunctive queries to decide which is the better one to use. Yet we don't know if such queries can be compared based on the number of answers they generate.
Edit: added some key references as requested by Sylvain.
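As noted above, brute force over small fixed queries can be worthwhile; here is a Python sketch counting homomorphisms between two small relational structures (restricted, for simplicity, to a single binary relation, i.e., digraphs):

```python
from itertools import product

def hom_count(S_edges, S_n, T_edges, T_n):
    """Count homomorphisms from structure S to structure T by brute force
    over all |T|^|S| vertex maps: a map f is a homomorphism iff every
    related pair of S is mapped to a related pair of T."""
    T = set(T_edges)
    return sum(
        all((f[u], f[v]) in T for u, v in S_edges)
        for f in product(range(T_n), repeat=S_n))
```

Comparing `hom_count(S, ..., T, ...)` with `hom_count(S', ..., T, ...)` over sample targets $T$ can refute domination, but of course no finite set of samples can confirm it, which is the open problem.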
In general, what is the relationship between time and space complexity classes?
There are many unresolved questions such as:
Is $PTIME = NLOGSPACE$?
Is $PTIME = DLOGSPACE$?
Is $PTIME = PSPACE$?
Is $DSPACE(s(n)) \subseteq PTIME$ for some $s(n) = \omega(\log n)$?
Is $DTIME(t(n)) \subseteq PSPACE$ for some super polynomial function $t(n)$?
Also, there are several more specific questions coming out of classic research papers that were never fully answered:
Is $NSPACE(k \log(n)) \subseteq DTIME(n^{k - \varepsilon})$ for any $k$ and $\varepsilon > 0$? (see conjecture from Kasai and Iwata 1985)
Is $DTIME(n) \subseteq DTISP(poly(n), \frac{n}{\log n})$? (simulation from Hopcroft, Paul, Valiant 1977 only seems to work with super polynomial time complexity)
Is $NSPACE(\log(n)) \subseteq DTISP(poly(n), \log^2 n)$? (simulation from Savitch's theorem only seems to work with super polynomial time complexity)
Can inductive-recursive types be constructed in univalent models of type theory?
Recently, Hugunin (2019) showed that inductive-inductive types are compatible with univalent models of type theory, and demonstrated how to construct them in cubical type theory. The case of inductive-recursive types, however, is still unclear.
Some more open problems in type theory here.