{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Reversed MC RAPTOR\n", "\n", "## Left out at this stage:\n", "\n", "- Footpaths\n", "- Time to get out of one transport and walk to the platform of the next\n", "- Real probabilities\n", "\n", "## Encoding the data structures\n", "### General considerations\n", "We adhere to the data structures proposed by Delling et al. These structures aim to minimize read times in memory by making use of consecutive in-memory adresses. Thus, structures with varying dimensions (e.g dataframes, python lists) are excluded. We illustrate the difficulty with an example. \n", "\n", "Each route has a potentially unique number of stops. Therefore, we cannot store stops in a 2D array of routes by stops, as the number of stops is not the same for each route. We adress this problem by storing stops consecutively by route, and keeping track of the index of the first stop for each route.\n", "\n", "This general strategy is applied to all the required data structures, where possible.\n", "\n", "### routes\n", "The `routes` array will contain arrays `[n_trips, n_stops, pt_1st_stop, pt_1st_trip]` where all four values are `int`. To avoid overcomplicating things and try to mimic pointers in python, `pt_1st_stop` and `pt_1st_trip` contain integer indices." ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "lines_to_next_cell": 0 }, "outputs": [], "source": [ "import numpy as np\n", "import pickle\n", "\n", "def pkload(path):\n", " with open(path, 'rb') as f:\n", " obj = pickle.load(f)\n", " return obj" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "lines_to_next_cell": 0 }, "outputs": [ { "data": { "text/plain": [ "array([[140, 3, 0, 0],\n", " [37, 4, 3, 140],\n", " [48, 5, 7, 177],\n", " ...,\n", " [84, 2, 3273, 22625],\n", " [85, 2, 3275, 22709],\n", " [13, 5, 3277, 22794]], dtype=object)" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "routes = pkload(\"../data/routes_array.pkl\")[:, [0,1,3,2]] # Fixing the order -> remove the indexation once the pkl file is fixed\n", "routes[:,[3,2]] = np.concatenate(([[0,0]], routes[:-1,[3,2]]), axis=0)\n", "routes" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "lines_to_next_cell": 0 }, "outputs": [], "source": [ "# routes = np.array([[2, 3, 0, 0], #r0\n", "# [2, 3, 3, 6], #r1\n", "# [2, 2, 6, 12]]) # r2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### routeStops\n", "`routeStops` is an array that contains the ordered lists of stops for each route. `pt_1st_stop` in `routes` is required to get to the first stop of the route. is itself an array that contains the sequence of stops for route $r_i$." ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 791, 1036, 1037, ..., 471, 472, 473])" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "routeStops = pkload(\"../data/route_stops_array.pkl\")\n", "routeStops" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "lines_to_next_cell": 0 }, "outputs": [], "source": [ "# routeStops = np.array([0, 1, 2, # A, B, C\n", "# 3, 2, 4, # D, C, E\n", "# 0, 4]) # A, E" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### stopTimes\n", "\n", "The i-th entry in the `stopTimes` array is itself an array which contains the arrival and departure time at a particular stop for a particular trip. `stopTimes` is sorted by routes, and then by trips. 
] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 'NaT', '2020-05-19T08:49:00.000000000'],\n", "       ['2020-05-19T08:50:00.000000000', '2020-05-19T08:50:00.000000000'],\n", "       ['2020-05-19T08:51:00.000000000', '2020-05-19T08:51:00.000000000'],\n", "       ...,\n", "       [ 'NaT', '2020-05-19T16:07:00.000000000'],\n", "       ['2020-05-19T16:11:00.000000000', '2020-05-19T16:15:00.000000000'],\n", "       ['2020-05-19T16:19:00.000000000', 'NaT']],\n", "      dtype='datetime64[ns]')" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "stopTimes = pkload(\"../data/stop_times_array.pkl\")\n", "stopTimes" ] },
{ "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# stopTimes = np.array([\n", "#     # r0, t0\n", "#     [None, '2020-05-11T08:00'],\n", "#     ['2020-05-11T08:25', '2020-05-11T08:30'],\n", "#     ['2020-05-11T08:55', None],\n", "\n", "#     # r0, t1\n", "#     [None, '2020-05-11T08:10'],\n", "#     ['2020-05-11T08:35', '2020-05-11T08:40'],\n", "#     ['2020-05-11T09:05', None],\n", "\n", "#     # r1, t0\n", "#     [None, '2020-05-11T08:00'],\n", "#     ['2020-05-11T08:05', '2020-05-11T08:10'],\n", "#     ['2020-05-11T08:15', None],\n", "\n", "#     # r1, t1\n", "#     [None, '2020-05-11T09:00'],\n", "#     ['2020-05-11T09:05', '2020-05-11T09:10'],\n", "#     ['2020-05-11T09:15', None],\n", "\n", "#     # r2, t0\n", "#     [None, '2020-05-11T08:20'],\n", "#     ['2020-05-11T09:20', None],\n", "\n", "#     # r2, t1\n", "#     [None, '2020-05-11T08:30'],\n", "#     ['2020-05-11T09:30', None]],\n", "#     dtype='datetime64')" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "`NaT` is the `None` equivalent for `numpy` `datetime64`." ] },
{ "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ True, False])" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "np.isnat(stopTimes[0])" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### stopRoutes\n", "\n", "`stopRoutes` contains, for each stop, the routes serving it, stored consecutively by stop. We need the pointers stored in `stops` to index `stopRoutes` correctly."
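, "\n", "\n", "A minimal sketch of how `stops` and `stopRoutes` are meant to be read together (it mirrors the commented toy example and the main loop below; `p` is an arbitrary stop index):\n", "\n", "```python\n", "p = 0  # arbitrary stop index, for illustration only\n", "# stops[p] holds the start and (exclusive) end of the slice of stopRoutes listing the routes serving p\n", "routes_serving_p = stopRoutes[stops[p][0]:stops[p][1]]\n", "```"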
] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([1460288880641, 180388626432, 481036337153, ..., 317827579907,\n", "       317827579906, 618475290627])" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "stopRoutes = pkload(\"../data/stop_routes_array.pkl\").flatten()\n", "stopRoutes" ] },
{ "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "# stopRoutes = np.array([0, 2, # A\n", "#                        0,    # B\n", "#                        0, 1, # C\n", "#                        1,    # D\n", "#                        1, 2]) # E" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We should also build an array for transfer times (including walking times), but for now let's ignore this additional complexity. Finally, the i-th entry in the `stops` array points to the first entry in `stopRoutes` (and, once transfers are added, in `transfers`) associated with stop $p_i$." ] },
{ "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[1, 2],\n", "       [2, 9],\n", "       [3, 16],\n", "       ...,\n", "       [5387, 10654],\n", "       [5389, 10669],\n", "       [5398, 10702]], dtype=object)" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "stops = pkload(\"../data/stops_array.pkl\")\n", "stops" ] },
{ "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(5545,)\n" ] }, { "data": { "text/plain": [ "(1512, 2)" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "print(stopRoutes.shape)\n", "stops.shape" ] },
{ "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "# # stopRoutes[stops[p][0]:stops[p][1]] returns the routes serving stop p.\n", "# stops = np.array([[0,2], # A\n", "#                   [2,3], # B\n", "#                   [3,5], # C\n", "#                   [5,6], # D\n", "#                   [6,len(stopRoutes)] # E\n", "#                  ])" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Coding the reversed Multiple Criteria RAPTOR\n", "\n", "Based on a modified version of RAPTOR (reversed RAPTOR), we implement a multiple-criteria RAPTOR algorithm.\n", "The optimization criteria are:\n", "- Latest departure\n", "- Highest probability of success of the entire trip\n", "- Lowest number of connections (implicit in the round-based approach)" ] },
{ "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "# helper functions\n", "\n", "def arr_and_dep_time(r, t, offset_p):\n", "    \"\"\"This function should not be called directly.\n", "    Use arrival_time and departure_time.\n", "    In particular, this function assumes that t is not None.\n", "    \"\"\"\n", "    return stopTimes[routes[r][3]       # 1st trip of route\n", "                     + t * routes[r][1] # offset for the right trip\n", "                     + offset_p         # offset for the right stop\n", "                     ]\n", "\n", "def arrival_time(r, t, offset_p):\n", "    \"\"\"Returns a date in the year 2000 (standing in for minus infinity) if t is None.\n", "    Otherwise, returns the arrival time of the t-th trip of route r\n", "    at the offset_p-th stop of route r.\n", "    Trips and stops of route r start at t=0, offset_p=0.\n", "    \"\"\"\n", "    if t is None:\n", "        return np.datetime64('2000-01-01T01:00')\n", "    \n", "    return arr_and_dep_time(r,t,offset_p)[0] # 0 for arrival time\n", "\n", "def departure_time(r, t, offset_p):\n", "    \"\"\"Raises TypeError if t is None.\n", "    Otherwise, returns the departure time of the t-th trip of route r\n", "    at the offset_p-th stop of route r.\n", "    Trips and stops of route r start at t=0 and 
offset_p=0.\n", " \"\"\"\n", " if t is None:\n", " raise TypeError(\"Requested departure time of None trip!\")\n", " \n", " return arr_and_dep_time(r,t,offset_p)[1] # 1 for departure time\n", "\n", "def get_stops(r):\n", " \"\"\"Returns the stops of route r\"\"\"\n", " idx_first_stop = routes[r][2]\n", " return routeStops[idx_first_stop:idx_first_stop+routes[r][1]] # n_stops = routes[r][1] " ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "class InstantiationException(Exception):\n", " pass\n", "\n", "class BaseLabel:\n", " \"\"\"An abstract base class for Labels. Do not instantiate.\n", " A label corresponds to a recursive (partial) solution, going\n", " to the target stop from the stop currently under consideration.\n", " \"\"\"\n", " def __init__(self, stop, tau_dep, Pr):\n", " self.stop = stop\n", " self.tau_dep = tau_dep\n", " self.Pr = Pr\n", " \n", " def dominates(self, other):\n", " \"\"\"Returns True if self dominates other, else returns False.\n", " other: another Label instance.\n", " \"\"\"\n", " if self.tau_dep >= other.tau_dep and self.Pr >= other.Pr:\n", " return True\n", " return False\n", " \n", " def print_journey(self):\n", " print(\"Journey begins at stop {stop} at time {tau}, with an \"\n", " \"overall probability of success = {Pr} \\n\".format(\n", " stop = self.stop,\n", " tau = self.tau_dep,\n", " Pr = self.Pr\n", " )\n", " )\n", " self.print_instructions()\n", " \n", " def to_str(self):\n", " s = \"Departure at {0} from stop {1}.\".format(self.tau_dep, self.stop)\n", " return repr(type(self)) + s\n", " \n", " def pprint(self, indent=''):\n", " print(indent, self.to_str())\n", " \n", " def copy(self):\n", " raise InstantiationException(\"class BaseLabel should never \"\n", " \"be instantiated.\"\n", " )\n", "\n", "class ImmutableLabel(BaseLabel):\n", " \"\"\"Base class for immutable Labels\"\"\"\n", " def copy(self):\n", " return self\n", "\n", "class TargetLabel(ImmutableLabel):\n", " \"\"\"A special type of label reserved for the target stop.\"\"\"\n", " def __init__(self, stop, tau_dep):\n", " BaseLabel.__init__(self, stop, tau_dep, 1.)\n", " \n", " def print_instructions(self):\n", " \"\"\"Finish printing instructions for the journey.\"\"\"\n", " print(\"You have arrived at the target stop ({stop}) \"\n", " \"before the target time of {tau}.\".format(\n", " stop=self.stop,\n", " tau=self.tau_dep\n", " ))\n", "\n", "class WalkLabel(ImmutableLabel):\n", " \"\"\"A special type of label for walking connections.\"\"\"\n", " def __init__(self, stop, tau_walk, next_label):\n", " tau_dep = next_label.tau_dep - tau_walk\n", " BaseLabel.__init__(self, stop, tau_dep, next_label.Pr)\n", " self.tau_walk = tau_walk\n", " self.next_label = next_label\n", " \n", " def print_instructions(self):\n", " \"\"\"Recursively print instructions for the whole journey.\"\"\"\n", " print(\"Walk {tau} minutes from stop {p1} to stop {p2}\"\n", " \".\".format(\n", " tau = self.tau_walk,\n", " p1 = self.stop,\n", " p2 = self.next_label.stop\n", " ))\n", " self.next_label.print_instructions()\n", "\n", "class RouteLabel(BaseLabel):\n", " \"\"\"A type of label for regular transports.\"\"\"\n", " def __init__(self,\n", " stop,\n", " tau_dep,\n", " r,\n", " t,\n", " next_label,\n", " Pr_connection_success):\n", " \n", " Pr = Pr_connection_success * next_label.Pr\n", " BaseLabel.__init__(self, stop, tau_dep, Pr)\n", " \n", " self.r = r\n", " self.t = t\n", " self.next_label = next_label\n", " self.route_stops = get_stops(r)\n", " self.offset_p = 
np.where(self.route_stops == stop)[0][0]\n", "        # Store Pr_connection_success for self.copy()\n", "        self.Pr_connection_success = Pr_connection_success\n", "        \n", "    def update_stop(self, stop):\n", "        self.stop = stop\n", "        self.offset_p = self.offset_p - 1\n", "        # Sanity check:\n", "        assert self.route_stops[self.offset_p] == stop\n", "        self.tau_dep = departure_time(self.r, self.t, self.offset_p)\n", "        \n", "    def print_instructions(self):\n", "        \"\"\"Recursively print instructions for the whole journey.\"\"\"\n", "        print(\" \"*4 + \"At stop {stop}, take route {r} at time \"\n", "              \"{tau}.\".format(stop=self.stop,\n", "                              r=self.r,\n", "                              tau=self.tau_dep\n", "                              )\n", "              )\n", "        tau_arr = arrival_time(\n", "            self.r,\n", "            self.t,\n", "            # scalar offset of the next stop within the route:\n", "            np.where(self.route_stops == self.next_label.stop)[0][0]\n", "        )\n", "        print(\" \"*4 + \"Get out at stop {stop} at time {tau}\"\n", "              \".\".format(stop=self.next_label.stop, tau=tau_arr)\n", "              )\n", "        self.next_label.print_instructions()\n", "        \n", "    def copy(self):\n", "        \"\"\"When RouteLabels are merged into the bag of a stop,\n", "        they must be copied (because they will subsequently\n", "        be changed with self.update_stop()).\n", "        \"\"\"\n", "        return RouteLabel(self.stop,\n", "                          self.tau_dep,\n", "                          self.r,\n", "                          self.t,\n", "                          self.next_label,\n", "                          self.Pr_connection_success\n", "                          )" ] },
{ "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [], "source": [ "p_s = 0 # start stop = A\n", "p_t = 4 # target stop = E\n", "tau_0 = np.datetime64('2020-05-11T09:30') # arrival time 09:30\n", "Pr_min = 0.9\n", "Pr_threshold = Pr_min**(0.1)" ] },
{ "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[[[], [], [], [], [<__main__.TargetLabel at 0x7f2e69e0e090>]]]" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# initialization\n", "n_stops = stops.shape[0]\n", "\n", "# Initialize empty bags for each stop for round 0:\n", "bags = [\n", "    [\n", "        [] # an empty bag\n", "    for _ in range(n_stops)] # one empty bag per stop\n", "]\n", "\n", "marked = [p_t]\n", "bags[0][p_t].append(TargetLabel(p_t, tau_0))\n", "bags" ] },
{ "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "# bag operations\n", "def update_bag(bag, label, k):\n", "    \"\"\"Add label to bag and remove dominated labels.\n", "    bag is altered in-place.\n", "    \n", "    k: Round number, used for target pruning.\n", "    \n", "    returns: Boolean indicating whether bag was altered.\n", "    \"\"\"\n", "    # Apply the Pr_min constraint to label:\n", "    if label.Pr < Pr_min:\n", "        return False\n", "    \n", "    # Prune label if it is dominated by bags[k][p_s]:\n", "    for L_star in bags[k][p_s]:\n", "        if L_star.dominates(label):\n", "            return False\n", "    \n", "    # Otherwise, merge label into bag\n", "    changed = False\n", "    for L_old in list(bag): # iterate over a copy: we may remove labels from bag\n", "        if L_old.dominates(label):\n", "            return changed\n", "        if label.dominates(L_old):\n", "            bag.remove(L_old)\n", "            changed = True\n", "    bag.append(label.copy())\n", "    return True\n", "\n", "def merge_bags(bag1, bag2, k):\n", "    \"\"\"Merge bag2 into bag1 in-place.\n", "    k: Round number, used for target pruning.\n", "    returns: Boolean indicating whether bag was altered.\n", "    \"\"\"\n", "    changed = False\n", "    for label in bag2:\n", "        # Call update_bag first so that it is not short-circuited away once changed is True.\n", "        changed = update_bag(bag1, label, k) or changed\n", "    return changed" ] },
{ "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n",
"******************************STARTING round k=1******************************\n", "Marked stops at the start of the round: [4]\n", "Queue before traversing each route: [(1, 4), (2, 4)]\n", "\n", "****TRAVERSING ROUTE r=1 from stop p=4****\n", "\n", "\n", " p_i: 4\n", "\n", " ----scanning arrival times for route r=1 at stop p_i=4----\n", " Explored connection from\n", " Departure at NaT from stop 4.\n", " to\n", " Departure at 2020-05-11T09:30 from stop 4.\n", "\n", "\n", " p_i: 2\n", "\n", " ----scanning arrival times for route r=1 at stop p_i=2----\n", "\n", "\n", " p_i: 3\n", "\n", " ----scanning arrival times for route r=1 at stop p_i=3----\n", "\n", "****TRAVERSING ROUTE r=2 from stop p=4****\n", "\n", "\n", " p_i: 4\n", "\n", " ----scanning arrival times for route r=2 at stop p_i=4----\n", " Explored connection from\n", " Departure at NaT from stop 4.\n", " to\n", " Departure at 2020-05-11T09:30 from stop 4.\n", "\n", "\n", " p_i: 0\n", "\n", " ----scanning arrival times for route r=2 at stop p_i=0----\n", "\n", "******************************STARTING round k=2******************************\n", "Marked stops at the start of the round: [2, 3, 0]\n", "Queue before traversing each route: [(0, 2), (1, 2), (2, 0)]\n", "\n", "****TRAVERSING ROUTE r=0 from stop p=2****\n", "\n", "\n", " p_i: 2\n", "\n", " ----scanning arrival times for route r=0 at stop p_i=2----\n", " Explored connection from\n", " Departure at NaT from stop 2.\n", " to\n", " Departure at 2020-05-11T09:10 from stop 2.\n", "\n", "\n", " p_i: 1\n", "\n", " ----scanning arrival times for route r=0 at stop p_i=1----\n", "\n", "\n", " p_i: 0\n", "\n", " ----scanning arrival times for route r=0 at stop p_i=0----\n", "\n", "****TRAVERSING ROUTE r=1 from stop p=2****\n", "\n", "\n", " p_i: 2\n", "\n", " ----scanning arrival times for route r=1 at stop p_i=2----\n", " Explored connection from\n", " Departure at 2020-05-11T09:10 from stop 2.\n", " to\n", " Departure at 2020-05-11T09:10 from stop 2.\n", "\n", "\n", " p_i: 3\n", "\n", " ----scanning arrival times for route r=1 at stop p_i=3----\n", "\n", "****TRAVERSING ROUTE r=2 from stop p=0****\n", "\n", "\n", " p_i: 0\n", "\n", " ----scanning arrival times for route r=2 at stop p_i=0----\n", "\n", "******************************STARTING round k=3******************************\n", "Marked stops at the start of the round: [1]\n", "Queue before traversing each route: [(0, 1)]\n", "\n", "****TRAVERSING ROUTE r=0 from stop p=1****\n", "\n", "\n", " p_i: 1\n", "\n", " ----scanning arrival times for route r=0 at stop p_i=1----\n", " Explored connection from\n", " Departure at 2020-05-11T08:40 from stop 1.\n", " to\n", " Departure at 2020-05-11T08:40 from stop 1.\n", "\n", "\n", " p_i: 0\n", "\n", " ----scanning arrival times for route r=0 at stop p_i=0----\n", "\n", "*************** THE END ***************\n", "Equilibrium reached. 
The end.\n" ] } ], "source": [ "# main loop\n", "indent= ' '*4\n", "\n", "k = 0\n", "while True:\n", " k += 1 # k=1 at fist round, as it should.\n", " \n", " # Instead of using best bags, carry over the bags from last round.\n", "# if len(bags <= k):\n", " \n", " bags.append(bags[-1].copy())\n", " \n", " print('\\n******************************STARTING round k={}******************************'.format(k))\n", " # accumulate routes serving marked stops from previous rounds\n", " q = []\n", " print('Marked stops at the start of the round: {}'.format(marked))\n", " for p in marked:\n", " for r in stopRoutes[stops[p][0]:stops[p][1]]: # foreach route r serving p\n", " append_r_p = True\n", " for idx, (rPrime, pPrime) in enumerate(q): # is there already a stop from the same route in q ?\n", " if rPrime == r:\n", " append_r_p = False\n", " p_pos_in_r = np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p)\n", " pPrime_pos_in_r = np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == pPrime)\n", " if p_pos_in_r > pPrime_pos_in_r:\n", " q[idx] = (r, p) # substituting (rPrime, pPrime) by (r, p)\n", " if append_r_p:\n", " q.append((r, p))\n", " \n", " marked = [] # unmarking all stops\n", " \n", " print('Queue before traversing each route: {}'.format(q))\n", " # traverse each route\n", " for (r, p) in q:\n", " print('\\n****TRAVERSING ROUTE r={0} from stop p={1}****'.format(r, p))\n", " B_route = [] # new empty route bag\n", " \n", " # we traverse the route backwards (starting at p, not from the end of the route)\n", " stops_of_current_route = get_stops(r)\n", " offset_p = np.where(stops_of_current_route == p)[0][0]\n", " for offset_p_i in range(offset_p, -1, -1):\n", " p_i = stops_of_current_route[offset_p_i]\n", " print('\\n\\n'+indent+\"p_i: {}\".format(p_i))\n", " \n", " # Update the labels of the route bag:\n", " for L in B_route:\n", " L.update_stop(p_i)\n", " \n", " # Merge B_route into bags[k][p_i]\n", " if merge_bags(bags[k][p_i], B_route, k):\n", " marked.append(p_i)\n", " \n", " # Can we step out of a later trip at p_i ?\n", " # This is only possible if we already know a way to get from p_i to p_t in < k vehicles\n", " # (i.e., if there is at least one label in bags[k][p_i])\n", " for L_k in bags[k][p_i]:\n", " # Note that k starts at 1 and bags[0][p_t] contains a TargetLabel.\n", " print('\\n'+indent+'----scanning arrival times for route r={0} at stop p_i={1}----'.format(r, p_i))\n", " \n", " # We check the trips from latest to earliest\n", " for t in range(routes[r][0]-1, -1, -1): # n_trips = routes[r][0]\n", " # Does t_r arrive early enough for us to make the rest \n", " # of the journey from here (tau[k-1][p_i])?\n", " if arrival_time(r, t, offset_p_i) <= L_k.tau_dep:\n", " \n", " Pr_connection = t / (routes[r][0]-1) # This is a placeholder.\n", " L_new = RouteLabel(p_i,\n", " departure_time(r, t, offset_p_i),\n", " r,\n", " t,\n", " L_k,\n", " Pr_connection\n", " )\n", " if update_bag(B_route, L_new, k):\n", " print(indent+\"Explored connection from\")\n", " L_new.pprint(indent*2)\n", " print(indent+\"to\")\n", " L_k.pprint(indent*2)\n", " \n", " # We don't want to add a label for every trip that's earlier than tau_dep.\n", " # Instead, we stop once we've found a trip that's safe enough.\n", " if Pr_connection > Pr_threshold:\n", " break\n", " \n", " # stopping criteria\n", " if not marked:\n", " print(\"\\n\" + \"*\"*15 + \" THE END \" + \"*\"*15)\n", " print(\"Equilibrium reached. 
The end.\")\n", " break\n", " if k>2:\n", " if bags[k-1][p_s]:\n", " print(\"\\n\" + \"*\"*15 + \" THE END \" + \"*\"*15)\n", " print(\"There is a solution with {0} connections. We shall not \"\n", " \"search for solutions with {1} or more connections\"\n", " \".\".format(k-2, k)\n", " )\n", " break" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", " ---------- OPTION 0\n", "Journey begins at stop 0 at time 2020-05-11T08:30, with an overall probability of success = 1.0 \n", "\n", " At stop 0, take route 2 at time 2020-05-11T08:30.\n", " Get out at stop 4 at time [['2020-05-11T09:30' 'NaT']].\n", "You have arrived at the target stop (4) before the target time of 2020-05-11T09:30.\n" ] } ], "source": [ "for i, label in enumerate(bags[k][p_s]):\n", " print('\\n'*2,'-'*10, 'OPTION', i)\n", " label.print_journey()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Code for prototyping and debugging:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(array([99., 80., 47., 22., 15., 11., 3., 0., 1., 1.]),\n", " array([1.0, 6.9, 12.8, 18.700000000000003, 24.6, 30.5, 36.400000000000006,\n", " 42.300000000000004, 48.2, 54.1, 60.0], dtype=object),\n", " )" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXcAAAD4CAYAAAAXUaZHAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAN3klEQVR4nO3df6jd9X3H8edrps5qtyaaS8gS3c0wVGTMH1ysYimd2YbVUv1DxFJGKIH8Yze7Ftq4wWT/KYxaB0MIapuBWJ3tlqCjrUstY38s7Y3aGpM6MxtrJJor03Xrxtqs7/1xvsJddqP3nO89npwPzwdczvl+vt9zvu83+fq6Xz/nfL83VYUkqS2/NOkCJEkrz3CXpAYZ7pLUIMNdkhpkuEtSg1ZNugCAtWvX1uzs7KTLkKSpsn///teramapdadFuM/OzjI/Pz/pMiRpqiR56VTrnJaRpAYZ7pLUIMNdkhr0juGe5IEkx5McWDR2bpInkrzQPa7pxpPkL5IcTvKDJJePs3hJ0tKWc+b+FeDak8Z2AHurajOwt1sG+CiwufvZDty7MmVKkobxjuFeVf8A/OtJwzcAu7rnu4AbF43/VQ38E7A6yfqVKlaStDyjzrmvq6pj3fNXgXXd8w3Ay4u2O9qNSZLeRb0/UK3BPYOHvm9wku1J5pPMLyws9C1DkrTIqOH+2lvTLd3j8W78FeD8Rdtt7Mb+n6raWVVzVTU3M7PkBVaSpBGNeoXqHmArcGf3uHvR+KeTfBX4IPBvi6ZvxmJ2x+PjfPu3deTO6ye2b0l6O+8Y7kkeAj4CrE1yFLiDQag/kmQb8BJwc7f53wHXAYeB/wQ+NYaaJUnv4B3Dvao+cYpVW5bYtoBb+xYlSerHK1QlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBhnuktQgw12SGmS4S1KDDHdJapDhLkkNMtwlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBvcI9yR8leS7JgSQPJTkryaYk+5IcTvJwkjNXqlhJ0vKMHO5JNgB/CMxV1W8CZwC3AHcBd1fVhcAbwLaVKFSStHx9p2VWAe9Nsgo4GzgGXAM82q3fBdzYcx+SpCGtGvWFVfVKkj8Hfgz8F/AtYD/wZlWd6DY7CmxY6vVJtgPbAS644IJRy5io2R2PT2S/R+68fiL7lTQ9+kzLrAFuADYBvwacA1y73NdX1c6qmququZmZmVHLkCQtoc+0zO8AP6qqhar6OfB14GpgdTdNA7AReKVnjZKkIfUJ9x8DVyY5O0mALcBB4Engpm6brcDufiVKkoY1crhX1T4GH5w+BTzbvddO4AvAZ5McBs4D7l+BOiVJQxj5A1WAqroDuOOk4ReBK/q8rySpH69QlaQGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBhnuktQgw12SGmS4S1KDDHdJapDhLkkNMtwlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBvUK9ySrkzya5IdJDiW5Ksm5SZ5I8kL3uGalipUkLU/fM/d7gG9U1UXAJcAhYAewt6o2A3u7ZUnSu2jkcE/yfuDDwP0AVfWzqnoTuAHY1W22C7ixb5GSpOH0OXPfBCwAX07ydJL7kpwDrKuqY902rwLrlnpxku1J5pPMLyws9ChDknSyPuG+CrgcuLeqLgN+yklTMFVVQC314qraWVVzVTU3MzPTowxJ0sn6hPtR4GhV7euWH2UQ9q8lWQ/QPR7vV6IkaVgjh3tVvQq8nOQD3dAW4CCwB9jajW0FdveqUJI0tFU9X/8HwINJzgReBD7F4BfGI0m2AS8BN/fchyRpSL3CvaqeAeaWWLWlz/tKkvrxClVJapDhLkkNMtwlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SW
qQ4S5JDTLcJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBq2adAEa3uyOxye27yN3Xj+xfUtaPs/cJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBvUO9yRnJHk6yWPd8qYk+5IcTvJwkjP7lylJGsZKnLnfBhxatHwXcHdVXQi8AWxbgX1IkobQK9yTbASuB+7rlgNcAzzabbILuLHPPiRJw+t75v4l4PPAL7rl84A3q+pEt3wU2LDUC5NsTzKfZH5hYaFnGZKkxUYO9yQfA45X1f5RXl9VO6tqrqrmZmZmRi1DkrSEPn+s42rg40muA84CfhW4B1idZFV39r4ReKV/mZKkYYx85l5Vt1fVxqqaBW4Bvl1VnwSeBG7qNtsK7O5dpSRpKOP4nvsXgM8mOcxgDv7+MexDkvQ2VuRvqFbVd4DvdM9fBK5YifeVJI3GK1QlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBhnuktQgw12SGmS4S1KDDHdJapDhLkkNMtwlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDVo16QI0XWZ3PD6R/R658/qJ7FeaViOfuSc5P8mTSQ4meS7Jbd34uUmeSPJC97hm5cqVJC1Hn2mZE8Dnqupi4Erg1iQXAzuAvVW1GdjbLUuS3kUjh3tVHauqp7rn/w4cAjYANwC7us12ATf2LVKSNJwV+UA1ySxwGbAPWFdVx7pVrwLrTvGa7Unmk8wvLCysRBmSpE7vcE/yPuBrwGeq6ieL11VVAbXU66pqZ1XNVdXczMxM3zIkSYv0Cvck72EQ7A9W1de74deSrO/WrweO9ytRkjSsPt+WCXA/cKiqvrho1R5ga/d8K7B79PIkSaPo8z33q4HfB55N8kw39sfAncAjSbYBLwE39ytRkjSskcO9qv4RyClWbxn1fSVJ/Xn7AUlqkOEuSQ0y3CWpQYa7JDXIu0JqKng3Smk4nrlLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBhnuktQgw12SGuRFTNLbmNTFU+AFVOrHM3dJapDhLkkNMtwlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoP8Yx3SaWpSfyjEPxLSBs/cJalBhrskNWgs4Z7k2iTPJzmcZMc49iFJOrUVn3NPcgbwl8DvAkeB7yXZU1UHV3pfktoyyT9IPinj+oxjHGfuVwCHq+rFqvoZ8FXghjHsR5J0CuP4tswG4OVFy0eBD568UZLtwPZu8T+SPL+M914LvN67wtNHS/201Au01c9QveSuMVayMlr6tyF39ern10+1YmJfhayqncDOYV6TZL6q5sZU0ruupX5a6gXa6qelXsB+lmsc0zKvAOcvWt7YjUmS3iXjCPfvAZuTbEpyJnALsGcM+5EkncKKT8tU1Ykknwa+CZwBPFBVz63Q2w81jTMFWuqnpV6grX5a6gXsZ1lSVeN4X0nSBHmFqiQ1yHCXpAZNTbhP+y0NkjyQ5HiSA4vGzk3yRJIXusc1k6xxuZKcn+TJJAeTPJfktm586vpJclaS7yb5ftfLn3Xjm5Ls6463h7svB0yNJGckeTrJY93y1PaT5EiSZ5M8k2S+G5u6Yw0gyeokjyb5YZJDSa4aVy9TEe6LbmnwUeBi4BNJLp5sVUP7CnDtSWM7gL1VtRnY2y1PgxPA56rqYuBK4Nbu32Ma+/lv4JqqugS4FLg2yZXAXcDdVXUh8AawbYI1juI24NCi5Wnv57er6tJF3wefxmMN4B7gG1V1EXAJg3+j8fRSVaf9D3AV8M1Fy7cDt0+6rhH6mAUOLFp+HljfPV8PPD/pGkfsazeDewlNdT/A2cBTDK6ofh1Y1Y3/n+PvdP9hcG3JXuAa4DEgU97PEWDtSWNTd6wB7wd+RPdFlnH3MhVn7ix9S4MNE6plJa2rqmPd81eBdZMsZhRJZoHLgH1MaT/dFMYzwHHgCeBfgDer6kS3ybQdb18CPg/8ols+j+nup4BvJdnf3bYEpvNY2wQsAF/upszuS3IOY+plWsK9eTX4tT1V30tN8j7ga8Bnquoni9dNUz9V9T9VdSmDM94rgIsmXNLIknwMOF5V+yddywr6UFVdzmBa9tYkH168coqOtVXA5cC9VXUZ8FNOmoJZyV6mJdxbvaXBa0nWA3SPxydcz7IleQ+DYH+wqr7eDU9tPwBV9SbwJINpi9VJ3rrIb5qOt6uBjyc5wuCOrNcwmOed1n6oqle6x+PA3zD4BTyNx9pR4GhV7euWH2UQ9mPpZVrCvdVbGuwBtnbPtzKYuz7tJQlwP3Coqr64aNXU9ZNkJsnq7vl7GXx2cIhByN/UbTYVvQBU1e1VtbGqZhn8d/LtqvokU9pPknOS/Mpbz4HfAw4whcdaVb0KvJzkA93QFuAg4+pl0h8yDPFhxHXAPzOYD/2TSdczQv0PAceAnzP4Db6NwVzoXuAF4O+Bcydd5zJ7+RCD/3X8AfBM93PdNPYD/BbwdNfLAeBPu/HfAL4LHAb+GvjlSdc6Qm8fAR6b5n66ur/f/Tz31n/703isdXVfCsx3x9vfAmvG1Yu3H5CkBk3LtIwkaQiGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWrQ/wJ3eeD1vPXiQgAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "import matplotlib.pyplot as plt\n", "# Plot distribution of n_stops\n", "plt.hist(routes[:,1])" ] }, { "cell_type": "code", "execution_count": 95, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[0]" ] }, "execution_count": 95, "metadata": {}, "output_type": "execute_result" } ], "source": [ "list(range(0,-1,-1))" ] }, { "cell_type": "code", "execution_count": 49, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "None\n" ] }, { "data": { "text/plain": [ "[0, 1, 3, 4, 5]" ] }, "execution_count": 49, "metadata": {}, "output_type": "execute_result" } ], "source": [ "l = list(range(6))\n", "ret = l.remove(2)\n", "print(ret)\n", "l" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Departure at 2020-05-11T08:00 from stop 0.\n", " Departure at 2020-05-11T08:10 from stop 0.\n" ] } ], "source": [ "B = [RouteLabel(1,1,0,0,TargetLabel(p_t, tau_0),0.8), RouteLabel(1,1,0,1,TargetLabel(p_t, tau_0),1)]\n", "B[0].update_stop(0)\n", "B[1].update_stop(0)\n", "for l in B:\n", " l.pprint()" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Departure at 2020-05-11T08:20 from stop 0.\n", "True\n", " Departure at 2020-05-11T08:10 from stop 0.\n", " Departure at 2020-05-11T08:20 from stop 555.\n", "----------\n", " Departure at 2020-05-11T08:20 from stop 555.\n" ] } ], "source": [ "label = RouteLabel(4,0, 2, 0, TargetLabel(p_t, tau_0), 0.9)\n", "label.update_stop(0)\n", "label.pprint()\n", "print(update_bag(B, label, 0))\n", "label.stop = 555\n", "for l in B:\n", " l.pprint()\n", "print('-'*10)\n", "label.pprint()" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[[1, 2, 3], [1, 2, 666]]" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "bags = [[1,2,3]]\n", "bags.append(bags[-1].copy())\n", "bags[1][2] = 666\n", "bags" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p_s = 0 # start stop = A\n", "p_t = 4 # target stop = E\n", "tau_0 = np.datetime64('2020-05-11T08:05') # departure time 08:05\n", "k_max = 10 # we set a maximum number of transports to pre-allocate memory for the numpy array tau_i" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# initialization\n", "n_stops = len(stops)\n", "\n", "# earliest arrival time at each stop for each round.\n", "tau = np.full(shape=(k_max, n_stops), fill_value = np.datetime64('2100-01-01T00:00')) # 2100 instead of infinity # number of stops * max number of transports\n", "\n", "# earliest arrival time at each stop, indep. 
of round\n", "tau_star = np.full(shape=n_stops, fill_value = np.datetime64('2100-01-01T00:00'))\n", "\n", "marked = [p_s]\n", "q = []\n", "tau[0, p_s] = tau_0" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i)[0][0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p_i" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "t_r_dep = stopTimes[routes[r][3]+\\\n", " # offset corresponding to stop p_i in route r\n", " np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i)[0][0] + \\\n", " routes[r][1]*t_r][1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 2) <\\\n", "np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 3):\n", " print(\"hello\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "routeStops[routes[1][2] + np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 2)[0][0]:routes[1][2]+routes[1][1]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "routeStops[routes[1][2] + np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 2)[0][0]:6]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "routeStops[routes[1][2]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "routeStops[np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 2)[0][0]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if True and \\\n", " True:\n", " print(\"hello\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tau[0][0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "stopTimes[3][1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "a = np.arange(1, 10)\n", "a" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "a[1:10:2]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "stopTimes[routes[0][3]+\\\n", " # offset corresponding to stop p_i in route r\n", " np.where(routeStops[routes[0][2]:routes[0][2]+routes[0][1]] == 0)[0][0]:\\\n", " # end of the trips of r\n", " routes[0][3]+routes[0][0]*routes[0][1]:\\\n", " # we can jump from the number of stops in r to find the next departure of route r at p_i\n", " routes[0][1]\n", " ]\n", "# we may more simply loop through all trips, and stop as soon as the departure time is after the arrival time\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "stopTimes[routes[0][3]+\\\n", " # offset corresponding to stop p_i in route r\n", " np.where(routeStops[routes[0][2]:routes[0][2]+routes[0][1]] == 0)[0][0]][1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "stopTimes[routes[1][3]+\\\n", " # offset corresponding to stop p_i in route r\n", " 
np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 3)[0][0] + \\\n", " routes[1][1]*1][1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# t_r is a trip that belongs to route r. t_r can take value 0 to routes[r][0]-1\n", "t = None\n", "r = 1\n", "tau_k_1 = tau[0][0]\n", "p_i = 3\n", "\n", "t_r = 0\n", "while True:\n", " \n", " t_r_dep = stopTimes[routes[r][3]+\\\n", " # offset corresponding to stop p_i in route r\n", " np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i)[0][0] + \\\n", " routes[r][1]*t_r][1]\n", " \n", " if t_r_dep > tau_k_1:\n", " # retrieving the index of the departure time of the trip in stopTimes\n", " #t = routes[r][3] + t_r * routes[r][1]\n", " t = t_r\n", " break\n", " t_r += 1\n", " # we could not hop on any trip at this stop\n", " if t_r == routes[r][0]:\n", " break\n", " \n", "print(\"done\")\n", "print(t)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "r = 1\n", "t = 1\n", "p_i = 2\n", "# 1st trip of route + offset for the right trip + offset for the right stop\n", "stopTimes[routes[r][3] + t * routes[r][1] + np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i)]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "d = []\n", "not d" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "r = 1\n", "t = 0\n", "p_i = 4\n", "arr_t_p_i = stopTimes[routes[r][3] + \\\n", " t * routes[r][1] + \\\n", " np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i)[0][0]][0]\n", "arr_t_p_i" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.datetime64('NaT') > np.datetime64('2100-01-01')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.datetime64('NaT') < np.datetime64('2100-01-01')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "jupytext": { "formats": "ipynb,md,py:percent" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 4 }