OrangeFFT's blog

By OrangeFFT, history, 5 weeks ago, In English

Hey everyone!

I am working on a project where I need to implement some operations efficiently. Unfortunately, I couldn't find any problem similar to this one, so I'll describe it.

The problem

Assume we are given an $$$N \times N$$$ matrix containing only zeroes and ones. I'm interested in a way, possibly a data structure built over the matrix, that can answer queries and perform updates of these types:

  1. Q j: find the index of the first (bottom-most) row that has a $$$1$$$ in the $$$j$$$-th column of the matrix, or report that there are no $$$1$$$s in that column ($$$1 \le j \le N$$$).
  2. U i: clear the $$$i$$$-th row, i.e. completely fill it with zeroes ($$$1 \le i \le N$$$).

Although I'd also be happy to learn a more general approach, in case it helps, in my particular scenario it is guaranteed that all queries are given in non-decreasing order of columns: once a query is made about the first position of a $$$1$$$ in the $$$j$$$-th column, no later query will refer to columns $$$1..j-1$$$ (though multiple consecutive queries can be made for the same column). Also, no update will try to clear the same row twice.

Example

So, here's an example of a possible sequence of queries/updates over the following $$$4 \times 4$$$ matrix (rows are numbered from $$$1$$$ to $$$4$$$ bottom-up and columns are numbered from $$$1$$$ to $$$4$$$ from left to right):

\begin{array}{|c|c|c|c|} \hline 1 & 1 & 0 & 0 \cr \hline 0 & 1 & 0 & 0 \cr \hline 0 & 0 & 1 & 0 \cr \hline 0 & 1 & 1 & 1 \cr \hline \end{array}

  1. Q 1: the first query asks us to find the bottom-most row containing a $$$1$$$ in the first column, so we report $$$4$$$.
  2. U 4: the fourth row is cleared and so the new updated matrix is this:

\begin{array}{|c|c|c|c|} \hline 0 & 0 & 0 & 0 \cr \hline 0 & 1 & 0 & 0 \cr \hline 0 & 0 & 1 & 0 \cr \hline 0 & 1 & 1 & 1 \cr \hline \end{array}

  3. Q 2: the bottom-most $$$1$$$ in column $$$2$$$ is in row $$$1$$$, so we report $$$1$$$.
  4. U 1: row $$$1$$$ is cleared and the new updated matrix becomes:

\begin{array}{|c|c|c|c|} \hline 0 & 0 & 0 & 0 \cr \hline 0 & 1 & 0 & 0 \cr \hline 0 & 0 & 1 & 0 \cr \hline 0 & 0 & 0 & 0 \cr \hline \end{array}

  5. Q 2: the bottom-most $$$1$$$ in column $$$2$$$ is now in row $$$3$$$, so we report $$$3$$$.
  6. U 3: row $$$3$$$ gets erased:

\begin{array}{|c|c|c|c|} \hline 0 & 0 & 0 & 0 \cr \hline 0 & 0 & 0 & 0 \cr \hline 0 & 0 & 1 & 0 \cr \hline 0 & 0 & 0 & 0 \cr \hline \end{array}

  7. Q 3: we report $$$2$$$.
  8. U 2: the entire matrix becomes empty:

\begin{array}{|c|c|c|c|} \hline 0 & 0 & 0 & 0 \cr \hline 0 & 0 & 0 & 0 \cr \hline 0 & 0 & 0 & 0 \cr \hline 0 & 0 & 0 & 0 \cr \hline \end{array}

  9. Q 3: since there are no $$$1$$$s in column $$$3$$$, we report that fact.

Discussion

Can anyone think of a data structure that can be built in a time nearly proportional to the matrix size (something like between $$$\mathcal{O}(N^2)$$$ and $$$\mathcal{O}(N^2 \log^2 N)$$$ or so) that can support these operations online in $$$\mathcal{O}(\log N)$$$ or $$$\mathcal{O}(\log^2 N)$$$ per query/update?

I first thought of 2D segment trees/Fenwick trees, but the notion of 1D lazy propagation doesn't seem to extend to higher dimensions (although it is possible to perform range sum queries + range add updates or even support operations like $$$(+,+)$$$, $$$(\times, \times)$$$, $$$(\wedge, \wedge)$$$, $$$(\vee, \vee)$$$ in $$$\mathcal{O}(\log^2 N)$$$ per query/update). I thought of somehow implementing the notion of clearing a row using range sum updates and binary searching the first $$$1$$$ in a column with range sum queries, possibly maintaining multiple data structures at the same time for crossing results about ones and zeroes, but it seems a bit hard to model the problem using only range sums (but perhaps it is possible nonetheless).

Also, as far as I know a quad-tree takes $$$\mathcal{O}(N)$$$ time to clear an entire row (which is precisely the kind of operation required in this case).

Now, even though these operations may seem difficult to implement for the general case, this problem is very specific and has some interesting properties we could possibly take advantage of:

  • an update simply clears an entire row (it is just like "removing" it); we don't need to support range updates over arbitrary intervals.
  • given the presented assumptions, the number of updates is guaranteed to be $$$\mathcal{O}(N)$$$, since no row gets cleared twice. Because of that, after all the updates are done we will theoretically have "visited" only $$$\mathcal{O}(N)$$$ ones in the matrix, one per row, regardless of the density/sparsity of the original matrix (even if it started completely filled with ones). So it seems plausible to spend somewhere between $$$\mathcal{O}(N)$$$ and $$$\mathcal{O}(N \log^2 N)$$$ time for $$$N$$$ updates in total (instead of $$$\mathcal{O}(N^2)$$$ for $$$N$$$ updates in total).
  • a query will only refer to a specific column (and not to arbitrary subrectangles in the matrix) and, as said, the queries are made in non-decreasing order of columns.
  • we're only dealing with zeroes and ones, not with arbitrary integers, so for instance these queries/updates are all equivalent:

    • queries (enabling a binary search): range sum in column, range OR in column, range max in column, ...
    • updates: set entire row to $$$0$$$, multiply every element in row by $$$0$$$, perform an AND between each element in the row and $$$0$$$, update each element $$$x$$$ in the row to $$$\min(0,x)$$$, ...

Does anyone have an idea on how to solve this online in something close to $$$\mathcal{O}(\log N)$$$ or $$$\mathcal{O}(\log^2 N)$$$ per query/update (possibly amortized)? :)

Thank you!

 
 
 
 

»
5 weeks ago, # |

If amortized is okay you can just simulate. Store the cells grouped by column in ordered sets. Each update needs to remove at most O(N) elements and there are only O(N) updates, for a total of O(N^2 log(N)) work. Each query takes O(1) to find the min (or O(log(N)), depending on whether std::set caches its begin()).

  • »
    »
    5 weeks ago, # ^ |

    What I'm aiming for would be $$$\mathcal{O} (\log N)$$$ per query/update, meaning that if there are $$$\mathcal{O}(N)$$$ updates the total complexity regarding queries and updates should be $$$\mathcal{O}(N \log N)$$$.

    In this sense I am interested in minimizing the query/updates complexity but allowing for previously building a data structure in $$$\mathcal{O}(N^2)$$$ for instance.

    This may make more sense if you assume that we want to answer queries quickly and are not that interested in the time for building the data structure (as long as that build time doesn't exceed roughly the order of the matrix size). So we're interested in theoretically minimizing the query/update part, and not so much the data structure initialization.

    (If that helps understanding why, imagine we want to perform the same kind of sequence of queries and updates $$$N$$$ times to the original matrix; in this case, we can just build a persistent data structure once and perform queries and updates in total $$$\mathcal{O}(N^2 \log N)$$$ instead of the $$$\mathcal{O}(N^3 \log N)$$$ we would get using your method)

    Hope it makes sense, thank you!

  • »
    »
    5 weeks ago, # ^ |

    There is an easier and faster way to apply the same idea. Build a list/stack/etc. for every column, holding the positions of the 1s in it. Also keep a boolean array telling whether each row has been cleared or not.

    The update is just setting one position of the boolean array. To answer a query, while the bottom-most 1 in the column lies in a cleared row, you pop it. This amortizes to O(S), where S is the number of positions with 1s in the matrix, and the constant factor seems good.
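    A sketch of this lazy-deletion idea (one stack per column with the bottom-most row on top, plus the cleared flags), run on the example matrix from the post:

    ```cpp
    #include <cassert>
    #include <vector>
    using namespace std;

    // Each column keeps its 1 positions in a stack with the smallest
    // (bottom-most) row on top. Updates just flip a flag; queries pop
    // stale entries lazily, so the total work is O(S) over all
    // operations, where S is the number of 1s in the matrix.
    struct LazySim {
        vector<vector<int>> stk;
        vector<bool> cleared;
        LazySim(const vector<vector<int>>& grid, int n)
            : stk(n + 1), cleared(n + 1, false) {
            for (int j = 1; j <= n; j++)
                for (int i = n; i >= 1; i--)  // push top-down: bottom row ends on top
                    if (grid[i][j]) stk[j].push_back(i);
        }
        void clearRow(int i) { cleared[i] = true; }  // O(1)
        int query(int j) {                           // amortized O(1)
            auto& s = stk[j];
            while (!s.empty() && cleared[s.back()]) s.pop_back();
            return s.empty() ? 0 : s.back();
        }
    };

    int main() {
        vector<vector<int>> g = {
            {}, {0, 0, 1, 1, 1}, {0, 0, 0, 1, 0},
            {0, 0, 1, 0, 0}, {0, 1, 1, 0, 0}};
        LazySim s(g, 4);
        assert(s.query(1) == 4); s.clearRow(4);
        assert(s.query(2) == 1); s.clearRow(1);
        assert(s.query(2) == 3); s.clearRow(3);
        assert(s.query(3) == 2); s.clearRow(2);
        assert(s.query(3) == 0);
        return 0;
    }
    ```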

    I don't think it is good enough for OP, though.

    • »
      »
      »
      4 weeks ago, # ^ |

      When the matrix is dense your idea would take $$$\mathcal{O}(N^2)$$$ time (with what seems a really low constant), but for $$$N$$$ updates (one for each row) we would theoretically only need to actually "visit" $$$N$$$ ones in total (because after a row is "visited" that row gets cleared and won't be reported in a query again). From this fact, it seems reasonable to me that there may be a way that only depends somewhat linearly or linearithmically on $$$N$$$.

      UPD: Okay, I've updated the question to reinforce this. To be clear: your ideas, although otherwise efficient for a more general problem, do not work in (possibly amortized) $$$\mathcal{O}(\log N)$$$ or $$$\mathcal{O}(\log^2 N)$$$ time per query/update as my question asked, and thus are still not efficient enough in the setting I've presented. They are good ideas in the general sense nonetheless.

      Thank you though!

»
4 weeks ago, # |

Let's make the problem easier by adding the following restrictions:

  1. For column queries (queries of type Q) tell only whether or not there is a one in the column (and not the exact position of the bottom-most one).
  2. All row removal queries (queries of type U) should happen before all column queries. As a corollary, we may assume that column queries are indeed sorted by their column number.

I claim that even this easier problem is no simpler than triangle-finding (which is known to be not much easier than boolean matrix multiplication; see Theorem 1.3 here).

Indeed, consider an undirected graph with adjacency matrix $$$A$$$. Then, build the desired data structure over $$$A$$$. Now, iterate over $$$i$$$, one of the vertices of the hypothetical triangle. Let $$$N_i$$$ be the set of neighbours of $$$i$$$. Remove from the matrix all rows except those with numbers in $$$N_i$$$. Now, for each column whose number is in $$$N_i$$$, ask whether it contains a one. If at least one column does (say, the $$$j$$$-th column contains a one in the $$$k$$$-th row; by construction both $$$j$$$ and $$$k$$$ lie in $$$N_i$$$), then there is a triangle containing $$$i$$$ in the graph (namely the triangle $$$i-j-k-i$$$). Otherwise, there is no such triangle. Indeed, this process simply checks whether some two vertices in $$$N_i$$$ are adjacent to each other, by looking for ones in the corresponding "submatrix" of $$$A$$$.

We can repeat this process for each $$$i$$$. We don't need to rebuild the structure each time because, as you said, we can make it persistent. Or we can simply keep track of the changes to the structure and roll them back at the end. With the latter strategy, a data structure for the above simplified version of the problem with build time $$$P(n)$$$ and total time $$$Q(n)$$$ for answering $$$2n$$$ queries after preprocessing leads to an $$$O(P(n) + nQ(n))$$$ algorithm for triangle-finding.

Therefore, a data structure with $$$P(n) = O(n^2 \cdot \mathrm{polylog}(n))$$$ and $$$Q(n) = O(n \cdot \mathrm{polylog}(n))$$$ seems out of reach, because it would lead to a $$$O(n^2 \cdot \mathrm{polylog}(n))$$$ algorithm for finding a triangle in an undirected graph.

I am curious about the application you had in mind when asking this question, because it seems that it has to be related to triangle-finding and boolean matrix multiplication somehow. Maybe the problem you have in mind is also equivalent to them via subcubic reductions?

  • »
    »
    4 weeks ago, # ^ |

    Very interesting point and very nice insight!

    My project is indeed aimed at investigating boolean matrix multiplication-related problems in the setting of answering queries to a black box; after a few reductions I eventually stumbled on the subproblem I described, which is really similar to the one you presented. And yes, the goal of using a persistent data structure (or rollbacks) would serve exactly the purpose you pointed out.

    What's more, I had never found anything about this reasoning in the literature, so when I came up with it, it didn't seem to me that the problem was really that hard, and I suspected I could simply be over-complicating something. However, quite surprisingly, you quickly drew some connections. Is this "subproblem" known, then (or obvious enough to guess)? I am curious whether you know of any reference describing a similar approach (I did a quick search in the interesting paper you linked to, but could not find anything directly related -- besides, of course, the relation to triangle-finding you explained).

    The problem still seems deceptively simple to me, though. It seems to me that a clever use of something not far from range sum queries/updates would help, although it's still not obvious how we would manage the information about cleared rows. Maybe I should be attacking boolean matrix multiplication-related problems with a more serious posture from now on :)

    Thank you very much for your reply! :)