About Me

I’m Alican, a Ph.D. student in the Computer Science and Engineering Department at the University of Michigan, Ann Arbor. I’m a member of the GEMS Lab, where I am advised by Danai Koutra.

Formerly, I was a member of the Bilkent Information Retrieval Group (BilIR), advised by Prof. Fazli Can at Bilkent University, Turkey. My M.Sc. thesis was on multi-label learning in nonstationary data streams.

Recent News

  • [3 Sep 2019] Started my Ph.D. journey at the University of Michigan, Ann Arbor.
  • [9 Aug 2019] One short paper accepted to CIKM 2019.
  • [11 Jul 2019] Defended my M.Sc. Thesis!

Research Interests

  • Graph (Network) Mining: Graphs represent all sorts of interconnected systems, such as the world wide web, brains, social networks, and protein interactions. In graph mining, we want to extract information and learn insights about how these interconnected systems behave in the real world. Nowadays, I’m working on the following problems:
    • Embeddings in Temporal Graphs is still not a thoroughly explored area. Embeddings for temporal / dynamic graphs are generally generated on snapshots of the graph taken over time. However, it is still unclear how frequently snapshots should be taken, which embedding method should be used, or whether one can combine embeddings from the latest snapshots to reason better about upcoming ones.
    • Multiple Temporal Graph Summarization aims to generate a summary temporal trajectory from many temporal trajectories. For instance, given many child connectomes over time, can we find the typical trajectory of brain development in children? What is more, can we predict anomalous cases of brain development beforehand?
  • Data Stream Mining: Data streams are environments where high-volume data arrives continuously. I tackled several problems in this setting during my Master’s years:
    • Multi-label classification is the task of classifying instances into a subset of labels rather than just one, i.e. $y^* \subseteq \mathcal{L}$. We want to handle this exponentially large label space while performing well on the data stream and adapting to possible changes in the mapping between data instances and the label space (called concept drifts).
    • Concept drift detection aims to detect and flag whenever there is a change in the underlying distribution of the data, so that our models can take countermeasures against the drift.
    • Multi-label synthetic data stream generation, where we want to realistically model multi-label stream generators using single-label data stream generators. We also want to be able to simulate all kinds of concept drifts (gradual, sudden, recurring, and so on).
    • Ensemble pruning, where we want to drastically decrease the size of our ensemble while preserving its predictive performance and diversity.
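As an illustration of the drift-detection idea above, here is a minimal sketch of the classic Page-Hinkley test, which flags a drift when the monitored statistic (e.g. a model’s error rate) deviates persistently from its running mean. The class, parameter names, and defaults are my own illustrative choices, not taken from any specific library or from my thesis work.

```python
class PageHinkley:
    """Illustrative Page-Hinkley change detector for a univariate stream."""

    def __init__(self, delta=0.005, threshold=50.0):
        self.delta = delta          # tolerance for small fluctuations
        self.threshold = threshold  # alarm threshold (lambda)
        self.n = 0                  # observations seen so far
        self.mean = 0.0             # running mean of the stream
        self.cum = 0.0              # cumulative deviation m_t
        self.cum_min = 0.0          # minimum of m_t observed so far

    def update(self, x):
        """Feed one observation; return True if a drift is flagged."""
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta
        self.cum_min = min(self.cum_min, self.cum)
        # Drift when the statistic rises far above its historical minimum.
        return self.cum - self.cum_min > self.threshold
```

Feeding a stream whose mean jumps (say, 0-loss predictions followed by 1-loss predictions) keeps the detector silent before the change and raises the alarm shortly after it, which is the signal a streaming model would use to retrain or adapt.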

Other Interests

  • Books. You can look at my Goodreads page for what I’m reading nowadays and what my book taste is like in general. You’ll see some classics, magical realism, history of science, philosophy, scientist biographies, spiritualism and esoterism, scifi/fantasy there.
  • Manga and Anime. See my MyAnimeList profile. I have been following One Piece since 2012 (perhaps one of the biggest commitments of my life). Here are some of my favourite series: HunterxHunter, One Punch Man, Fate/Zero, Steins;Gate, Code Geass; and movies: Nausicaa, Mononoke Hime, Koe no Katachi, Kimi no Na wa, Sakasama no Patema.


GEMS Lab UMich CSE BilIR Bilkent University