{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Reversed MC RAPTOR\n", "\n", "## Left out at this stage:\n", "\n", "- Realistic time to get out of one transport and walk to the platform of the next. Instead, we just set it to 2 minutes, no matter what.\n", "\n", "## Encoding the data structures\n", "### General considerations\n", "We adhere to the data structures proposed by Delling et al. These structures aim to minimize read times in memory by making use of consecutive in-memory adresses. Thus, structures with varying dimensions (e.g dataframes, python lists) are excluded. We illustrate the difficulty with an example. \n", "\n", "Each route has a potentially unique number of stops. Therefore, we cannot store stops in a 2D array of routes by stops, as the number of stops is not the same for each route. We adress this problem by storing stops consecutively by route, and keeping track of the index of the first stop for each route.\n", "\n", "This general strategy is applied to all the required data structures, where possible.\n", "\n", "### routes\n", "The `routes` array will contain arrays `[n_trips, n_stops, pt_1st_stop, pt_1st_trip]` where all four values are `int`. To avoid overcomplicating things and try to mimic pointers in python, `pt_1st_stop` and `pt_1st_trip` contain integer indices." 
] }, { "cell_type": "code", "execution_count": 1, "metadata": { "lines_to_next_cell": 0 }, "outputs": [], "source": [ "import numpy as np\n", "import pickle\n", "\n", "def pkload(path):\n", " with open(path, 'rb') as f:\n", " obj = pickle.load(f)\n", " return obj" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "lines_to_next_cell": 0 }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(1461, 4)\n" ] }, { "data": { "text/plain": [ "array([[ 1, 26, 0, 0],\n", " [ 1, 8, 26, 26],\n", " [ 1, 17, 34, 34],\n", " ...,\n", " [ 1, 3, 15362, 260396],\n", " [ 2, 16, 15365, 260399],\n", " [ 1, 28, 15381, 260431]], dtype=uint32)" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "routes = pkload(\"../data/routes_array_cyril.pkl\").astype(np.uint32)\n", "print(routes.shape)\n", "routes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### routeStops\n", "`routeStops` is an array that contains the ordered lists of stops for each route. `pt_1st_stop` in `routes` is required to get to the first stop of the route. is itself an array that contains the sequence of stops for route $r_i$." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(15409,)\n", "1406\n" ] }, { "data": { "text/plain": [ "array([1221, 816, 776, ..., 1349, 1037, 552], dtype=uint16)" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "routeStops = pkload(\"../data/route_stops_array_cyril.pkl\").astype(np.uint16)\n", "print(routeStops.shape)\n", "print(routeStops.max())\n", "routeStops" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### stopTimes\n", "\n", "The i-th entry in the `stopTimes` array is itself an array which contains the arrival and departure time at a particular stop for a particular trip. `stopTimes` is sorted by routes, and then by trips. 
We retrieve the index of the first (earliest) trip of the route with the pointer `pt_1st_trip` stored in `routes`. We may use the built-in `numpy` [date and time data structures](https://blog.finxter.com/how-to-work-with-dates-and-times-in-python/). In short, declaring dates and times is done like this: `np.datetime64('YYYY-MM-DDThh:mm')`. Entries with a `NaT` arrival or departure time correspond to the beginning and the end of a trip, respectively.\n", "\n", "Note that trips are indexed implicitly in stopTimes; we deviate slightly from the paper and index them relative to their parent route instead of giving them an absolute index, which makes the algorithm a bit easier to code." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(260459, 2)\n" ] }, { "data": { "text/plain": [ "array([[ 'NaT', '2020-05-24T07:00:00.000000000'],\n", " ['2020-05-24T07:01:00.000000000', '2020-05-24T07:01:00.000000000'],\n", " ['2020-05-24T07:02:00.000000000', '2020-05-24T07:02:00.000000000'],\n", " ...,\n", " ['2020-05-24T07:35:00.000000000', '2020-05-24T07:35:00.000000000'],\n", " ['2020-05-24T07:36:00.000000000', '2020-05-24T07:36:00.000000000'],\n", " ['2020-05-24T07:37:00.000000000', 'NaT']],\n", " dtype='datetime64[ns]')" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "stopTimes = pkload(\"../data/stop_times_array_cyril.pkl\")\n", "print(stopTimes.shape)\n", "stopTimes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`NaT` is the `None` equivalent for `np.datetime64`."
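, "\n",
"The per-route trip indexing boils down to simple pointer arithmetic: the `stopTimes` row for the `offset_p`-th stop of the `t`-th trip of route `r` sits at `pt_1st_trip + t * n_stops + offset_p`. A minimal sketch with hypothetical numbers:\n",
"\n",
"```python\n",
"# Hypothetical route: its trips start at row 100 of stopTimes and it has 5 stops.\n",
"pt_1st_trip, n_stops = 100, 5\n",
"\n",
"def stop_times_idx_toy(t, offset_p):\n",
"    # row of stopTimes holding (arrival, departure) of trip t at stop offset offset_p\n",
"    return pt_1st_trip + t * n_stops + offset_p\n",
"\n",
"print(stop_times_idx_toy(0, 0))  # -> 100\n",
"print(stop_times_idx_toy(2, 3))  # -> 113\n",
"```"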
] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ True False]\n" ] } ], "source": [ "print(np.isnat(stopTimes[0]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### stopRoutes\n", "\n", "`stopRoutes` contains the routes (as `int`s representing an index in `routes`) associated with each stop. We need the pointer in `stops` to index `stopRoutes` correctly." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(15344,)\n" ] }, { "data": { "text/plain": [ "array([ 17, 116, 126, ..., 861, 982, 1087], dtype=uint32)" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "stopRoutes = pkload(\"../data/stop_routes_array_cyril.pkl\").flatten().astype(np.uint32)\n", "print(stopRoutes.shape)\n", "stopRoutes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## transfers\n", "`transfers` is a 2D `np.ndarray` where each entry `[p_j, time]` represents (in seconds) the time it takes to walk from stop p_j to the implicitely given stop p_i.\n", "p_i is given implicitely by the indexing, in conjunction with `stops`. In other words:\n", "`transfers[stops[p_i][2]:stops[p_i][3]]` returns all the footpaths arriving at stop p_i.\n", "\n", "As we cannot store different data types in numpy arras, `time` will have to be converted to `np.timedelta64`, the format used to make differences between `np.datetime.64` variables. We will consider all `time` values as **positive values in seconds**." 
] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(6264, 2)\n" ] }, { "data": { "text/plain": [ "array([[ 815, 267],\n", " [1350, 569],\n", " [ 63, 470],\n", " ...,\n", " [1113, 382],\n", " [1122, 338],\n", " [1270, 553]], dtype=uint16)" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "transfers = pkload(\"../data/transfer_array_cyril.pkl\").astype(np.uint16)\n", "print(transfers.shape)\n", "transfers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## stops\n", "\n", "`stops` stores the indices in `stopRoutes` and in `transfers` corresponding to each stop.\n", "\n", "`stopRoutes[stops[p][0]:stops[p][1]]` returns the routes serving stop p.\n", "\n", "`transfers[stops[p][2]:stops[p][3]]` returns the footpaths arriving at stop p." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ 0 75]\n", "[0 0]\n", "(1407, 2)\n", "[ 0 0 0 75]\n" ] }, { "data": { "text/plain": [ "array([[ 0, 11, 0, 2],\n", " [ 11, 20, 2, 7],\n", " [ 20, 38, 7, 22],\n", " ...,\n", " [15303, 15334, 6242, 6250],\n", " [15334, 15339, 6250, 6257],\n", " [15339, 15344, 6257, 6264]], dtype=uint32)" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "stops = pkload(\"../data/stops_array_cyril.pkl\")\n", "print(np.isnan(stops.astype(np.float64)).sum(axis=0))\n", "print(np.equal(stops, None).sum(axis=0))\n", "print(stops.shape)\n", "stops = stops[:,[0,0,1,1]]\n", "# Make column 1 contain the start_index of the next stop in stopRoutes\n", "stops[:-1,1] = stops[1:,0]\n", "stops[-1, 1] = stopRoutes.shape[0]\n", "# Make column 2 contain the start_index of the next stop in stopRoutes\n", "for i in np.isnan(stops[:,2].astype(np.float64)).nonzero()[0]:\n", " stops[i,2] = stops[i-1,2]\n", "print(np.isnan(stops.astype(np.float64)).sum(axis=0))\n", 
"stops[:-1,3] = stops[1:,2]\n", "stops[-1, 3] = transfers.shape[0]\n", "# Convert to int\n", "stops = stops.astype(np.uint32)\n", "stops" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "routes serving stop 0: [ 17 116 126 144 169 250 267 356 573 617 1054]\n", "stops of route 17: [ 168 1365 504 434 715 454 1236 186 959 81 130 774 284 958\n", " 815 0 1350 265 780 305 490 16 180 1397]\n", "stops of route 116: [1397 180 16 490 305 780 265 1350 0 815 958 284 774 130\n", " 81 959 186 1236 454]\n", "stops of route 126: [ 0 1350 265 780 490 16 180 1397]\n", "stops of route 144: [1397 180 16 490 780 265 1350 0 815 958 284 774 130 81\n", " 959 186 1236 454 715 434 504 1365 168]\n", "stops of route 169: [ 0 1350 265 780 305 490 16 180 1397]\n", "stops of route 250: [ 0 815 958 284 774 130 81 959 186 1236 454 715 434 504\n", " 1365 168]\n", "stops of route 267: [1397 180 16 490 780 265 1350 0 815 958 284 774 130 81\n", " 959 186 1236 454]\n", "stops of route 356: [1397 180 16 490 305 780 265 1350 0 815 958 284 774 130\n", " 81 959 186 1236 454 715 434 504 1365 168]\n", "stops of route 573: [ 454 1236 186 959 81 130 774 284 958 815 0 1350 265 780\n", " 490 16 180 1397]\n", "stops of route 617: [ 454 1236 186 959 81 130 774 284 958 815 0 1350 265 780\n", " 305 490 16 180 1397]\n", "stops of route 1054: [ 168 1365 504 434 715 454 1236 186 959 81 130 774 284 958\n", " 815 0 1350 265 780 490 16 180 1397]\n", "stop 0 can be reached from stop 815 by walking for 267 seconds.\n", "stop 0 can be reached from stop 1350 by walking for 569 seconds.\n" ] } ], "source": [ "p = 0\n", "routes_serving_p = stopRoutes[stops[p][0]:stops[p][1]]\n", "print(\"routes serving stop 0:\", routes_serving_p)\n", "for r in routes_serving_p:\n", " print(\"stops of route {}:\".format(r), routeStops[routes[r][2]:routes[r][2]+routes[r][1]])\n", "for 
pPrime, walking_seconds in transfers[stops[p][2]:stops[p][3]]:\n", " print(\"stop {} can be reached from stop {} by walking for {} seconds.\".format(p, pPrime, walking_seconds))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Distribution of delays" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0.03314917, 0.88950276, 0.97237569, 0.98895028, 0.98895028,\n", " 0.98895028, 0.99447514, 0.99447514, 0.99447514, 1. ,\n", " 1. , 1. , 1. , 1. , 1. ,\n", " 1. , 1. , 1. , 1. , 1. ,\n", " 1. , 1. , 1. , 1. , 1. ,\n", " 1. , 1. , 1. , 1. , 1. ,\n", " 1. , 1. ],\n", " [0. , 0.85082873, 0.95027624, 0.98895028, 0.98895028,\n", " 0.98895028, 0.98895028, 0.99447514, 0.99447514, 0.99447514,\n", " 1. , 1. , 1. , 1. , 1. ,\n", " 1. , 1. , 1. , 1. , 1. ,\n", " 1. , 1. , 1. , 1. , 1. ,\n", " 1. , 1. , 1. , 1. , 1. ,\n", " 1. , 1. ]])" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import gzip \n", "with gzip.open(\"../data/join_distribution_cumulative_p_3.pkl.gz\") as distrib_pkl:\n", " distrib_delays = pickle.load(distrib_pkl)\n", " \n", "distrib_delays[0:2]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Relate `stop_id`s and `trip_headsign`s to the integer indices used in the algorithm" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
route_idstop_id_generalstop_nametrip_headsignroute_intstop_int
026-13-j19-18576240Zürich, MeierhofplatzZürich, Albisgütli01221
126-13-j19-18591353Zürich, SchwertZürich, Albisgütli0816
226-13-j19-18591039Zürich, Alte TrotteZürich, Albisgütli0776
326-13-j19-18591121Zürich, EschergutwegZürich, Albisgütli0307
426-13-j19-18591417Zürich, WaidfusswegZürich, Albisgütli0347
\n", "
" ], "text/plain": [ " route_id stop_id_general stop_name trip_headsign \\\n", "0 26-13-j19-1 8576240 Zürich, Meierhofplatz Zürich, Albisgütli \n", "1 26-13-j19-1 8591353 Zürich, Schwert Zürich, Albisgütli \n", "2 26-13-j19-1 8591039 Zürich, Alte Trotte Zürich, Albisgütli \n", "3 26-13-j19-1 8591121 Zürich, Eschergutweg Zürich, Albisgütli \n", "4 26-13-j19-1 8591417 Zürich, Waidfussweg Zürich, Albisgütli \n", "\n", " route_int stop_int \n", "0 0 1221 \n", "1 0 816 \n", "2 0 776 \n", "3 0 307 \n", "4 0 347 " ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "stop_times_df = pkload(\"../data/stop_times_df_cyril.pkl\")[['route_id', 'stop_id_general', 'stop_name', 'trip_headsign', 'route_int', 'stop_int']]\n", "stop_times_df.head()" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[8502508 8503078 8503088 ... 8591173 8590772 8503509] ['Spreitenbach, Raiacker' 'Waldburg' 'Zürich HB SZU' ...\n", " 'Zürich, Haldenbach' 'Rüschlikon, Belvoir' 'Schlieren']\n" ] } ], "source": [ "stop_ids_names = stop_times_df[['stop_id_general', 'stop_int', 'stop_name']].drop_duplicates()\n", "assert np.all(stop_ids_names == stop_ids_names.drop_duplicates(subset='stop_int'))\n", "assert np.all(stop_ids_names == stop_ids_names.drop_duplicates(subset='stop_id_general'))\n", "assert np.all(stop_ids_names == stop_ids_names.drop_duplicates(subset='stop_name'))\n", "stop_ids_names = stop_ids_names.sort_values(by='stop_int')\n", "stop_ids = stop_ids_names['stop_id_general'].to_numpy()\n", "stop_names = stop_ids_names['stop_name'].to_numpy()\n", "print(stop_ids, stop_names)" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1218 8590901 Wangen b D'dorf, Dorfplatz\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
route_idstop_id_generalstop_nametrip_headsignroute_intstop_int
2205926-759-j19-18590901Wangen b D'dorf, DorfplatzWangen b D'dorf, Dorfplatz1321218
3086926-759-j19-18590901Wangen b D'dorf, DorfplatzZürich Flughafen, Bahnhof1671218
\n", "
" ], "text/plain": [ " route_id stop_id_general stop_name \\\n", "22059 26-759-j19-1 8590901 Wangen b D'dorf, Dorfplatz \n", "30869 26-759-j19-1 8590901 Wangen b D'dorf, Dorfplatz \n", "\n", " trip_headsign route_int stop_int \n", "22059 Wangen b D'dorf, Dorfplatz 132 1218 \n", "30869 Zürich Flughafen, Bahnhof 167 1218 " ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "p = np.random.randint(stops.shape[0])\n", "print(p, stop_ids[p], stop_names[p])\n", "stop_times_df[stop_times_df['stop_int'] == p].head(2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Implementing the reversed Multiple Criteria RAPTOR\n", "\n", "Based on modified version of RAPTOR (reversed RAPTOR), we implement a multiple criteria RAPTOR algorithm.\n", "The optimization criteria are:\n", "- Latest departure\n", "- Highest probability of success of the entire trip\n", "- Lowest number of connections (implicit with the round-based approach)" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "numpy.datetime64('2020-05-11T15:28')" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# absolute constants:\n", "\n", "tau_change_platform = np.timedelta64(2, 'm')\n", "np.datetime64('2020-05-11T15:30') - tau_change_platform" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "# helper functions\n", "\n", "def calc_stopTimes_idx(r, t, offset_p):\n", " \"\"\"Returns the index of the entry in stopTimes\n", " corresponding to the offset_p-th stop of the t-th trip\n", " of route r.\n", " \"\"\"\n", " return (routes[r][3] # 1st trip of route\n", " + t * routes[r][1] # offset for the right trip\n", " + offset_p # offset for the right stop\n", " )\n", "\n", "def arrival_time(r, t, offset_p):\n", " \"\"\"Returns 2000 (instead of 0) if t is None.\n", " Otherwise, returns the arrival time of the t-th trip of route 
r\n", " at the offset_p-th stop of route r.\n", " trips and stops of route r start at t=0, offset_p=0.\n", " \"\"\"\n", " if t is None:\n", " return np.datetime64('2000-01-01T01:00')\n", " \n", " return stopTimes[calc_stopTimes_idx(r,t,offset_p)][0] # 0 for arrival time\n", "\n", "def departure_time(r, t, offset_p):\n", " \"\"\"Throws TypeError if t is None.\n", " Otherwise, returns the departure time of the t-th trip of route r\n", " at the offset_p-th stop of route r.\n", " trips and stops of route r start at t=0 & offset_p=0.\n", " \"\"\"\n", " if t is None:\n", " raise TypeError(\"Requested departure time of None trip!\")\n", " \n", " return stopTimes[calc_stopTimes_idx(r,t,offset_p)][1] # 1 for departure time\n", "\n", "def get_stops(r):\n", " \"\"\"Returns the stops of route r\"\"\"\n", " idx_first_stop = routes[r][2]\n", " return routeStops[idx_first_stop:idx_first_stop+routes[r][1]] # n_stops = routes[r][1] " ] }, { "cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [], "source": [ "class InstantiationException(Exception):\n", " pass\n", "\n", "class BaseLabel:\n", " \"\"\"An abstract base class for Labels. 
Do not instantiate.\n", " A label corresponds to a recursive (partial) solution, going\n", " to the target stop from the stop currently under consideration.\n", " \"\"\"\n", " def __init__(self, stop, tau_dep, Pr):\n", " self.stop = stop\n", " self.tau_dep = tau_dep\n", " self.Pr = Pr\n", " \n", " def dominates(self, other):\n", " \"\"\"Returns True if self dominates other, else returns False.\n", " other: another Label instance.\n", " \"\"\"\n", " if self.tau_dep >= other.tau_dep and self.Pr >= other.Pr:\n", " return True\n", " return False\n", " \n", " def print_journey(self):\n", " print(\"Journey begins at stop {stopN} ({stopI}) at time {tau}, with an \"\n", " \"overall probability of success = {Pr} \\n\".format(\n", " stopN = stop_names[self.stop],\n", " stopI = stop_ids[ self.stop],\n", " tau = self.tau_dep,\n", " Pr = self.Pr\n", " )\n", " )\n", " self.print_instructions()\n", " \n", " def to_str(self):\n", " s = \"Departure at {0} from stop {1} (id: {2}, int: {3}).\".format(\n", " self.tau_dep,\n", " stop_names[self.stop],\n", " stop_ids[self.stop],\n", " self.stop\n", " )\n", " return repr(type(self)) + s\n", " \n", " def pprint(self, indent=''):\n", " print(indent, self.to_str())\n", " \n", " def copy(self):\n", " raise InstantiationException(\"class BaseLabel should never \"\n", " \"be instantiated.\"\n", " )\n", "\n", "class ImmutableLabel(BaseLabel):\n", " \"\"\"Base class for immutable Labels\"\"\"\n", " def copy(self):\n", " return self\n", "\n", "class TargetLabel(ImmutableLabel):\n", " \"\"\"A special type of label reserved for the target stop.\"\"\"\n", " def __init__(self, stop, tau_dep):\n", " BaseLabel.__init__(self, stop, tau_dep, 1.)\n", " \n", " def print_instructions(self):\n", " \"\"\"Finish printing instructions for the journey.\"\"\"\n", " print(\"You have arrived at the target stop {stopN} ({stopI}) \"\n", " \"before the target time of {tau}.\".format(\n", " stopN=stop_names[self.stop],\n", " stopI=stop_ids[self.stop],\n", " 
tau=self.tau_dep\n", " ))\n", "\n", "class WalkLabel(ImmutableLabel):\n", " \"\"\"A special type of label for walking connections.\"\"\"\n", " def __init__(self, stop, tau_walk, next_label):\n", " \"\"\"Create a new WalkLabel instance.\n", " stop: stop where you start walking.\n", " tau_walk: (np.timedelta64) duration of the walk.\n", " next_label: label describing the rest of the trip after walking.\n", " \"\"\"\n", " if isinstance(next_label, WalkLabel):\n", " raise ValueError(\"Cannot chain two consecutive WalkLabels!\")\n", " tau_dep = next_label.tau_dep - tau_walk - tau_change_platform\n", " BaseLabel.__init__(self, stop, tau_dep, next_label.Pr)\n", " self.tau_walk = tau_walk\n", " self.next_label = next_label\n", " \n", " def print_instructions(self):\n", " \"\"\"Recursively print instructions for the whole journey.\"\"\"\n", " print(\"Walk {tau} seconds from stop {name1} ({id1}) to stop \"\n", " \"{name2} ({id2}).\".format(\n", " tau = self.tau_walk.astype(int),\n", " name1 = stop_names[self.stop],\n", " name2 = stop_names[self.next_label.stop],\n", " id1 = stop_ids[self.stop],\n", " id2 = stop_ids[self.next_label.stop]\n", " ))\n", " self.next_label.print_instructions()\n", "\n", "class RouteLabel(BaseLabel):\n", " \"\"\"A type of label for regular transports.\"\"\"\n", " def __init__(self,\n", " tau_dep,\n", " r,\n", " t,\n", " offset_p,\n", " next_label,\n", " Pr_connection_success):\n", " \n", " self.tau_dep = tau_dep\n", " self.r = r\n", " self.t = t\n", " self.offset_p = offset_p\n", " self.next_label = next_label\n", " # Store Pr_connection_success for self.copy()\n", " self.Pr_connection_success = Pr_connection_success\n", " \n", " self.route_stops = get_stops(self.r)\n", " self.stop = self.route_stops[self.offset_p]\n", " self.Pr = self.Pr_connection_success * self.next_label.Pr\n", " \n", " def update_stop(self, stop):\n", " self.stop = 
stop\n", " self.offset_p = self.offset_p - 1\n", " # Sanity check:\n", " assert self.offset_p >= 0\n", " assert self.route_stops[self.offset_p] == stop\n", " self.tau_dep = departure_time(self.r, self.t, self.offset_p)\n", " \n", " def print_instructions(self):\n", " \"\"\"Recursively print instructions for the whole journey.\"\"\"\n", " print(\" \"*4 + \"At stop {stopN} ({stopI}), take route {r}\"\n", " \" headed to {head} at time \"\n", " \"{tau}.\".format(stopN=stop_names[self.stop],\n", " stopI=stop_ids[self.stop],\n", " r=self.r,\n", " head=stop_times_df['trip_headsign'][calc_stopTimes_idx(self.r,\n", " self.t,\n", " self.offset_p\n", " )],\n", " tau=self.tau_dep\n", " )\n", " )\n", " tau_arr = arrival_time(\n", " self.r,\n", " self.t,\n", " np.where(self.route_stops == self.next_label.stop)\n", " )\n", " print(\" \"*4 + \"Get out at stop {stop} at time {tau}\"\n", " \".\".format(stop=self.next_label.stop, tau=tau_arr)\n", " )\n", " self.next_label.print_instructions()\n", " \n", " def copy(self):\n", " \"\"\"When RouteLabels are merged into the bag of a stop,\n", " they must be copied (because they will subsequently\n", " be changed with self.update_stop()).\n", " \"\"\"\n", " return RouteLabel(self.tau_dep,\n", " self.r,\n", " self.t,\n", " self.offset_p,\n", " self.next_label,\n", " self.Pr_connection_success\n", " )" ] }, { "cell_type": "code", "execution_count": 63, "metadata": {}, "outputs": [], "source": [ "def run_mc_raptor(p_s, p_t, tau_0, Pr_min\n", " , incoherences\n", " ):\n", " \"\"\"Run MC RAPTOR, using the data defined in cells above (stopRoutes etc.).\n", " Inputs:\n", " p_s: source stop\n", " p_t: target stop\n", " tau_0: latest acceptable arrival time\n", " Pr_min: minimum acceptable probability of success\n", " Output:\n", " bags_p_s: bags_p_s[k] contains the pareto set of non-dominated journeys\n", " from p_s to p_t that use at most k different trips (i.e,\n", " getting in at most k different vehicles), under the given\n", " 
constraints:\n", " 1. Each journey must succeed with a probability\n", " greater than or equal to Pr_min.\n", " 2. The journey is a success if and only if all individual\n", " connections succeed, including the user's appointment\n", " in p_t at tau_0.\n", " 3. A connection succeeds if, and only if, the user reaches\n", " the platform before or on the scheduled departure time\n", " (allowing some time to change platforms).\n", " Non-dominated:\n", " A journey J1 is *dominated* by another journey J2, if\n", " J2 departs no earlier than J1 AND the probability of\n", " success of J2 is no less than that of J1.\n", " Pareto set:\n", " Each bag in bags_p_s contains only journeys that are not\n", " dominated by any other possible journey. Such a collection\n", " of non-dominated solutions is called a *Pareto set*.\n", " \n", " Each journey is represented as a Label that forms the start of a chain.\n", " The journey can be reconstructed by calling label.print_journey().\n", " \"\"\"\n", "# initialization\n", " # For each route and for each label at each stop p, we will look at the n latest\n", " # trips until we find a trip for which the individual connection at stop p\n", " # succeeds with a probability at least equal to Pr_threshold.\n", " # Under some reasonable assumptions, setting Pr_threshold = Pr_min**(1/k)\n", " # guarantees that we will find a solution, if a solution exists involving at\n", " # most k connections (including the user's appointment in p_t at tau_0).\n", " Pr_threshold = Pr_min**(0.1) # i.e. Pr_min**(1/k) with k = 10\n", " \n", "\n", " # Initialize empty bags for each stop for round 0:\n", " n_stops = stops.shape[0]\n", " bags = [\n", " [\n", " [] # an empty bag\n", " for _ in range(n_stops)] # one empty bag per stop\n", " ]\n", "\n", " # Create a TargetLabel for p_t, and mark p_t\n", " bags[0][p_t].append(TargetLabel(p_t, tau_0))\n", " marked = {p_t}\n", "\n", "# Define bag operations (they depend on p_s and Pr_min for target pruning):\n", " def update_bag(bag, label, k):\n", " \"\"\"Add label to bag and remove dominated labels.\n", " bag is altered in-place.\n", "\n", " k: Round number, used for target pruning.\n", "\n", " returns: Boolean indicating whether bag was altered.\n", " \"\"\"\n", " # Apply the Pr_min constraint to label:\n", " if label.Pr < Pr_min:\n", " return False\n", "\n", " # Prune label if it is dominated by bags[k][p_s]:\n", " for L_star in bags[k][p_s]:\n", " if L_star.dominates(label):\n", " return False\n", "\n", " # Otherwise, merge label into bag\n", " changed = False\n", " for L_old in list(bag): # iterate over a copy: we may remove from bag\n", " if L_old.dominates(label):\n", " return changed\n", " if label.dominates(L_old):\n", " bag.remove(L_old)\n", " changed = True\n", " bag.append(label.copy())\n", " return True\n", "\n", " def merge_bags(bag1, bag2, k):\n", " \"\"\"Merge bag2 into bag1 in-place.\n", " k: Round number, used for target pruning.\n", " returns: Boolean indicating whether bag was altered.\n", " \"\"\"\n", " changed = False\n", " for label in bag2:\n", " # Call update_bag first, so that `or` cannot short-circuit it away\n", " changed = update_bag(bag1, label, k) or changed\n", " return changed\n", " \n", "# Define the footpaths-checking function (depends on update_bag)\n", " def check_footpaths(bags, marked, k):\n", " \"\"\"Modify bags and marked in-place to account for foot-paths.\"\"\"\n", " q = []\n", " for p in marked:\n", " for pPrime, delta_seconds in transfers[stops[p][2]:stops[p][3]]:\n", " q.append((p, pPrime, delta_seconds))\n", " for p, pPrime, delta_seconds in q:\n", " for L_k in bags[k][p]:\n", " # We do not allow two consecutive walking trips\n", " if not isinstance(L_k, WalkLabel):\n", " L_new = WalkLabel(pPrime, np.timedelta64(delta_seconds, 's'), L_k)\n", " if update_bag(bags[k][pPrime], L_new, k):\n", " marked.add(pPrime)\n", "\n", "# main loop\n", " indent = ' '*4\n", "\n", " k = 0\n", " # Check footpaths leading to p_t at k=0:\n", " check_footpaths(bags, marked, k)\n", " while True:\n", " k += 1 # k=1 at first round, as it should.\n", "\n", " # Instead of using best bags, carry over the bags from 
last round.\n", " # if len(bags <= k):\n", "\n", " bags.append([bags[-1][p].copy() for p in range(n_stops)])\n", "\n", " print('\\n******************************STARTING round k={}******************************'.format(k))\n", " # accumulate routes serving marked stops from previous rounds\n", " q = []\n", " print('Marked stops at the start of the round: {}'.format(marked))\n", " for p in marked:\n", " for r in stopRoutes[stops[p][0]:stops[p][1]]: # foreach route r serving p\n", " append_r_p = True\n", " for idx, (rPrime, pPrime) in enumerate(q): # is there already a stop from the same route in q ?\n", " if rPrime == r:\n", " append_r_p = False\n", " p_pos_in_r = np.where(get_stops(r) == p)[0][-1]\n", " pPrime_pos_in_r = np.where(get_stops(r) == pPrime)[0][-1]\n", " if p_pos_in_r > pPrime_pos_in_r:\n", " q[idx] = (r, p) # substituting (rPrime, pPrime) by (r, p)\n", " if append_r_p:\n", " q.append((r, p))\n", " marked.clear() # unmarking all stops\n", "# print(\"Queue:\", q)\n", "\n", "# print('Queue before traversing each route: {}'.format(q))\n", " # traverse each route\n", " for (r, p) in q:\n", "# print('\\n****TRAVERSING ROUTE r={0} from stop p={1}****'.format(r, p))\n", " B_route = [] # new empty route bag\n", "\n", " # we traverse the route backwards (starting at p, not from the end of the route)\n", " stops_of_current_route = get_stops(r)\n", "# print('Stops of current route:', stops_of_current_route)\n", " offset_p = np.asarray(stops_of_current_route == p).nonzero()[0]\n", " if offset_p.size < 1:\n", " if not p in incoherences:\n", " incoherences[p] = set()\n", " incoherences[p].add(r)\n", "# print(\"WARNING: route {r} is said to serve stop {p} in stopRoutes, but stop {p} \"\n", "# \"is not included as a stop of route {r} in routeStops...\".format(p=p, r=r))\n", " offset_p = -1\n", " else:\n", " offset_p = offset_p[-1]\n", " for offset_p_i in range(offset_p, -1, -1):\n", " p_i = stops_of_current_route[offset_p_i]\n", "# print('\\n\\n'+indent+\"p_i: 
{}\".format(p_i))\n", "\n", " # Update the labels of the route bag:\n", " for L in B_route:\n", " L.update_stop(p_i)\n", "\n", " # Merge B_route into bags[k][p_i]\n", " if merge_bags(bags[k][p_i], B_route, k):\n", "# print(\"marking stop\", p_i)\n", " marked.add(p_i)\n", "\n", " # Can we step out of a later trip at p_i ?\n", " # This is only possible if we already know a way to get from p_i to p_t in < k vehicles\n", " # (i.e., if there is at least one label in bags[k][p_i])\n", " for L_k in bags[k][p_i]:\n", " # Note that k starts at 1 and bags[0][p_t] contains a TargetLabel.\n", "# print('\\n'+indent+'----scanning arrival times for route r={0} at stop p_i={1}----'.format(r, p_i))\n", "\n", " # We check the trips from latest to earliest\n", " for t in range(routes[r][0]-1, -1, -1): # n_trips = routes[r][0]\n", " # Does t_r arrive early enough for us to make the rest \n", " # of the journey from here (tau[k-1][p_i])?\n", " tau_arr = arrival_time(r, t, offset_p_i)\n", "# print(indent+'arrival time: ', tau_arr)\n", " if tau_arr <= L_k.tau_dep - tau_change_platform:\n", "\n", " max_delay = L_k.tau_dep - tau_arr - tau_change_platform\n", " max_delay_int = min(max_delay.astype('timedelta64[m]').astype('int'), 30)\n", " \n", " Pr_connection = distrib_delays[calc_stopTimes_idx(r, t, offset_p_i),\n", " max_delay_int + 1]\n", "# print(Pr_connection)\n", " L_new = RouteLabel(departure_time(r, t, offset_p_i),\n", " r,\n", " t,\n", " offset_p_i,\n", " L_k,\n", " Pr_connection\n", " )\n", " update_bag(B_route, L_new, k)#:\n", "# print(indent+\"Explored connection from\")\n", "# L_new.pprint(indent*2)\n", "# print(indent+\"to\")\n", "# L_k.pprint(indent*2)\n", "\n", " # We don't want to add a label for every trip that's earlier than tau_dep.\n", " # Instead, we stop once we've found a trip that's safe enough.\n", " if Pr_connection > Pr_threshold:\n", " break\n", " \n", "# print(marked)\n", " # Look at foot-paths (bags and marked are altered in-place):\n", " 
check_footpaths(bags, marked, k)\n", "# print(marked)\n", " # stopping criteria (1: reached equilibrium, 2: Found a solution with k-2 trips)\n", " if not marked:\n", " print(\"\\n\" + \"*\"*15 + \" THE END \" + \"*\"*15)\n", " print(\"Equilibrium reached. The end.\")\n", " break\n", "# if k > 10:\n", "# break\n", " if k>2:\n", " if bags[k-2][p_s]:\n", " print(\"\\n\" + \"*\"*15 + \" THE END \" + \"*\"*15)\n", " print(\"There is a solution with {0} connections. We shall not \"\n", " \"search for solutions with {1} or more connections\"\n", " \".\".format(k-3, k)\n", " )\n", " break\n", " return [bags[K][p_s] for K in range(len(bags))], bags\n", "\n", "def time_sorted_unique(bags_p):\n", " \"\"\"Input: list of bags, e.g. for one stop and various k.\n", " It is assumed that Pr >= Pr_min for each label.\n", " Output: A np.array of unique labels, sorted by decreasing\n", " departure time.\n", " \"\"\"\n", " res = set()\n", " for bag in bags_p:\n", " for label in bag:\n", " res.add(label)\n", " res = np.array(list(res))\n", " return res[np.argsort([label.tau_dep for label in res])[::-1]]\n", "\n", "def print_solutions(bags_p):\n", " for i, label in enumerate(time_sorted_unique(bags_p)):\n", " print('\\n'*2,'-'*10, 'OPTION', i+1)\n", " label.print_journey()" ] }, { "cell_type": "code", "execution_count": 64, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1033\n", "\n", "******************************STARTING round k=1******************************\n", "Marked stops at the start of the round: {2, 616, 1033, 1290, 1196, 272, 785, 880, 947, 820, 1173, 1268, 1080, 602, 1404, 543}\n", "\n", "******************************STARTING round k=2******************************\n", "Marked stops at the start of the round: {3, 4, 5, 6, 7, 10, 11, 12, 13, 14, 16, 18, 19, 20, 22, 25, 26, 28, 30, 32, 37, 40, 41, 43, 48, 49, 51, 52, 53, 56, 57, 58, 59, 66, 67, 70, 71, 74, 75, 76, 78, 79, 81, 82, 84, 86, 87, 89, 91, 92, 96, 97, 98, 99, 
107, 108, 112, 114, 115, 116, 117, 118, 119, 120, 124, 128, 130, 131, 132, 133, 134, 136, 137, 138, 139, 142, 144, 146, 147, 148, 149, 153, 155, 156, 160, 162, 165, 166, 167, 168, 170, 171, 173, 174, 175, 176, 177, 180, 187, 189, 190, 193, 195, 196, 197, 198, 203, 204, 205, 206, 207, 211, 214, 216, 221, 225, 226, 227, 231, 232, 233, 237, 238, 239, 243, 244, 245, 247, 248, 250, 253, 256, 258, 259, 262, 264, 266, 267, 271, 273, 275, 276, 279, 283, 290, 294, 296, 298, 299, 300, 301, 302, 306, 307, 309, 314, 319, 320, 325, 326, 328, 330, 332, 336, 339, 340, 341, 343, 346, 347, 348, 353, 354, 356, 357, 358, 360, 362, 363, 364, 366, 368, 370, 371, 372, 373, 375, 384, 385, 386, 388, 389, 390, 391, 393, 394, 396, 397, 402, 404, 405, 410, 411, 412, 415, 416, 417, 421, 425, 427, 428, 429, 433, 436, 437, 440, 441, 443, 444, 445, 446, 448, 450, 452, 455, 461, 462, 463, 464, 465, 467, 469, 470, 473, 478, 483, 487, 494, 495, 496, 503, 505, 509, 510, 513, 514, 518, 519, 520, 521, 522, 525, 527, 528, 532, 533, 535, 537, 539, 543, 544, 547, 552, 556, 557, 558, 559, 561, 566, 569, 573, 575, 577, 579, 583, 584, 587, 590, 591, 594, 597, 599, 601, 602, 606, 607, 608, 609, 611, 613, 614, 615, 618, 619, 621, 623, 630, 631, 632, 634, 639, 642, 643, 649, 653, 654, 656, 657, 660, 663, 664, 666, 667, 668, 669, 670, 671, 672, 673, 677, 678, 679, 681, 682, 684, 685, 686, 687, 688, 689, 690, 692, 695, 697, 702, 704, 707, 710, 711, 712, 713, 716, 720, 721, 723, 726, 728, 733, 734, 736, 737, 738, 739, 743, 744, 746, 747, 749, 751, 755, 757, 759, 760, 761, 762, 766, 767, 768, 769, 771, 773, 775, 776, 779, 781, 782, 786, 790, 792, 794, 795, 796, 798, 800, 803, 804, 807, 808, 812, 816, 818, 819, 821, 822, 824, 825, 826, 829, 830, 832, 835, 841, 842, 847, 848, 849, 850, 851, 852, 853, 855, 857, 858, 864, 871, 873, 875, 877, 879, 881, 886, 889, 890, 891, 892, 895, 897, 903, 906, 907, 910, 911, 917, 918, 919, 922, 925, 931, 933, 937, 940, 943, 944, 945, 946, 949, 950, 952, 955, 956, 962, 963, 968, 969, 
971, 975, 980, 982, 983, 984, 985, 986, 990, 991, 993, 997, 999, 1000, 1004, 1005, 1006, 1008, 1010, 1013, 1014, 1015, 1017, 1022, 1024, 1025, 1026, 1030, 1031, 1032, 1035, 1037, 1038, 1039, 1040, 1041, 1042, 1043, 1051, 1052, 1055, 1057, 1059, 1060, 1062, 1063, 1064, 1066, 1071, 1073, 1075, 1081, 1083, 1084, 1094, 1097, 1099, 1102, 1103, 1107, 1110, 1112, 1113, 1117, 1122, 1123, 1124, 1125, 1128, 1131, 1133, 1134, 1135, 1139, 1146, 1147, 1149, 1152, 1153, 1154, 1156, 1157, 1158, 1163, 1164, 1166, 1168, 1175, 1177, 1180, 1181, 1184, 1185, 1187, 1188, 1189, 1190, 1192, 1193, 1197, 1198, 1202, 1203, 1205, 1206, 1208, 1209, 1212, 1214, 1215, 1216, 1217, 1221, 1223, 1226, 1229, 1230, 1233, 1236, 1237, 1238, 1241, 1243, 1244, 1246, 1248, 1249, 1250, 1253, 1254, 1265, 1267, 1270, 1274, 1276, 1277, 1280, 1283, 1284, 1288, 1291, 1293, 1294, 1297, 1298, 1302, 1303, 1306, 1311, 1312, 1313, 1315, 1318, 1322, 1323, 1324, 1325, 1327, 1332, 1334, 1336, 1337, 1344, 1345, 1347, 1349, 1351, 1352, 1353, 1354, 1355, 1356, 1357, 1360, 1363, 1369, 1370, 1373, 1374, 1375, 1376, 1378, 1381, 1385, 1386, 1387, 1388, 1389, 1391, 1392, 1396, 1397, 1399, 1405, 1406}\n", "\n", "******************************STARTING round k=3******************************\n", "Marked stops at the start of the round: {0, 1, 3, 4, 9, 11, 13, 14, 15, 17, 18, 19, 20, 21, 22, 25, 28, 29, 30, 31, 33, 34, 40, 46, 48, 49, 50, 51, 56, 57, 58, 65, 66, 67, 69, 70, 71, 76, 77, 78, 79, 83, 84, 85, 89, 90, 91, 96, 98, 104, 107, 108, 114, 116, 122, 124, 125, 127, 131, 133, 137, 138, 139, 140, 141, 147, 149, 150, 152, 153, 154, 156, 157, 160, 161, 162, 168, 174, 175, 176, 177, 179, 181, 186, 187, 188, 189, 190, 192, 193, 194, 197, 198, 212, 213, 215, 216, 219, 220, 223, 225, 226, 227, 231, 232, 238, 240, 247, 250, 255, 256, 257, 260, 262, 264, 266, 267, 268, 270, 271, 273, 274, 275, 284, 285, 294, 297, 299, 300, 301, 302, 305, 307, 314, 319, 320, 324, 325, 326, 331, 332, 333, 336, 341, 343, 346, 347, 352, 353, 356, 365, 368, 
371, 372, 373, 376, 380, 381, 382, 384, 385, 386, 388, 389, 393, 394, 395, 396, 401, 404, 418, 420, 426, 429, 432, 433, 437, 444, 445, 446, 448, 449, 451, 452, 454, 455, 458, 459, 461, 462, 463, 470, 471, 480, 482, 483, 487, 490, 491, 494, 495, 497, 501, 511, 519, 520, 521, 522, 525, 526, 529, 535, 546, 549, 552, 557, 558, 559, 560, 561, 566, 569, 570, 571, 573, 583, 584, 587, 590, 598, 599, 600, 604, 607, 608, 609, 611, 614, 617, 623, 624, 625, 631, 632, 634, 639, 641, 647, 648, 649, 650, 652, 653, 654, 655, 656, 658, 661, 663, 664, 666, 670, 671, 672, 673, 677, 678, 679, 681, 683, 686, 688, 689, 690, 693, 695, 697, 701, 702, 704, 705, 710, 711, 713, 715, 716, 719, 722, 725, 727, 728, 730, 735, 736, 737, 738, 739, 743, 746, 748, 753, 758, 759, 760, 762, 763, 767, 768, 769, 771, 773, 774, 776, 780, 786, 787, 791, 792, 793, 794, 797, 802, 803, 807, 809, 812, 815, 816, 821, 832, 833, 837, 838, 839, 840, 845, 846, 847, 849, 851, 852, 853, 864, 871, 874, 879, 881, 888, 890, 891, 893, 905, 910, 911, 912, 918, 919, 920, 922, 926, 928, 929, 930, 931, 941, 943, 945, 946, 951, 952, 954, 956, 958, 959, 961, 962, 963, 964, 968, 970, 975, 976, 982, 983, 986, 988, 990, 993, 996, 997, 1000, 1003, 1004, 1008, 1014, 1015, 1016, 1017, 1022, 1024, 1025, 1029, 1031, 1032, 1037, 1038, 1039, 1040, 1051, 1052, 1054, 1057, 1059, 1062, 1065, 1066, 1081, 1083, 1085, 1086, 1090, 1099, 1101, 1102, 1103, 1106, 1108, 1110, 1114, 1116, 1117, 1119, 1124, 1125, 1127, 1131, 1133, 1134, 1135, 1141, 1142, 1148, 1156, 1158, 1164, 1166, 1167, 1168, 1169, 1170, 1171, 1177, 1179, 1181, 1182, 1185, 1186, 1189, 1190, 1192, 1194, 1201, 1204, 1205, 1206, 1212, 1214, 1215, 1216, 1217, 1221, 1222, 1223, 1228, 1229, 1231, 1233, 1234, 1238, 1243, 1244, 1246, 1247, 1248, 1251, 1253, 1254, 1255, 1256, 1261, 1263, 1264, 1265, 1274, 1278, 1279, 1280, 1283, 1288, 1293, 1294, 1302, 1304, 1305, 1306, 1307, 1308, 1313, 1317, 1318, 1324, 1326, 1327, 1329, 1334, 1336, 1343, 1349, 1350, 1352, 1353, 1355, 1357, 1358, 1362, 
1365, 1379, 1381, 1382, 1385, 1386, 1387, 1388, 1389, 1391, 1394, 1398, 1399, 1401, 1403, 1406}\n", "\n", "*************** THE END ***************\n", "There is a solution with 0 connections. We shall not search for solutions with 3 or more connections.\n" ] } ], "source": [ "p_s = 1000 # start stop\n", "p_t = np.random.randint(stops.shape[0]) # target stop\n", "p_t = 1033\n", "print(p_t)\n", "tau_0 = np.datetime64('2020-05-24T17:30') # arrival time\n", "Pr_min = 0.9\n", "incoherences = {}\n", "bags_p_s, bags = run_mc_raptor(p_s, p_t, tau_0, Pr_min\n", " , incoherences\n", " )" ] }, { "cell_type": "code", "execution_count": 65, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{}" ] }, "execution_count": 65, "metadata": {}, "output_type": "execute_result" } ], "source": [ "incoherences" ] }, { "cell_type": "code", "execution_count": 66, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", " ---------- OPTION 1\n", "Journey begins at stop Zürich, Tüffenwies (8591402) at time 2020-05-24T16:56:00.000000000, with an overall probability of success = 0.991869918699187 \n", "\n", " At stop Zürich, Tüffenwies (8591402), take route 1315 headed to Zürich, Albisgütli at time 2020-05-24T16:56:00.000000000.\n", " Get out at stop 880 at time [['2020-05-24T17:10:00.000000000' '2020-05-24T17:10:00.000000000']].\n", "Walk 470 seconds from stop Zürich, Sihlquai/HB (8591368) to stop Zürich, Stampfenbachplatz (8591379).\n", "You have arrived at the target stop Zürich, Stampfenbachplatz (8591379) before the target time of 2020-05-24T17:30.\n", "\n", "\n", " ---------- OPTION 2\n", "Journey begins at stop Zürich, Tüffenwies (8591402) at time 2020-05-24T16:56:00.000000000, with an overall probability of success = 0.983739837398374 \n", "\n", " At stop Zürich, Tüffenwies (8591402), take route 1315 headed to Zürich, Albisgütli at time 2020-05-24T16:56:00.000000000.\n", " Get out at stop 602 at time 
[['2020-05-24T17:13:00.000000000' '2020-05-24T17:13:00.000000000']].\n", "Walk 334 seconds from stop Zürich, Bahnhofquai/HB (8587349) to stop Zürich, Stampfenbachplatz (8591379).\n", "You have arrived at the target stop Zürich, Stampfenbachplatz (8591379) before the target time of 2020-05-24T17:30.\n", "\n", "\n", " ---------- OPTION 3\n", "Journey begins at stop Zürich, Tüffenwies (8591402) at time 2020-05-24T16:54:00.000000000, with an overall probability of success = 0.9867006613632734 \n", "\n", " At stop Zürich, Tüffenwies (8591402), take route 342 headed to Zürich Altstetten, Bahnhof N at time 2020-05-24T16:54:00.000000000.\n", " Get out at stop 521 at time [['2020-05-24T16:57:00.000000000' 'NaT']].\n", "Walk 95 seconds from stop Zürich Altstetten, Bahnhof N (8591057) to stop Zürich Altstetten (8503001).\n", " At stop Zürich Altstetten (8503001), take route 130 headed to Pfäffikon ZH at time 2020-05-24T17:10:00.000000000.\n", " Get out at stop 1173 at time [['2020-05-24T17:15:00.000000000' '2020-05-24T17:19:00.000000000']].\n", "Walk 323 seconds from stop Zürich HB (8503000) to stop Zürich, Stampfenbachplatz (8591379).\n", "You have arrived at the target stop Zürich, Stampfenbachplatz (8591379) before the target time of 2020-05-24T17:30.\n", "\n", "\n", " ---------- OPTION 4\n", "Journey begins at stop Zürich, Tüffenwies (8591402) at time 2020-05-24T16:48:00.000000000, with an overall probability of success = 1.0 \n", "\n", " At stop Zürich, Tüffenwies (8591402), take route 1315 headed to Zürich, Albisgütli at time 2020-05-24T16:48:00.000000000.\n", " Get out at stop 1102 at time [['2020-05-24T16:58:00.000000000' '2020-05-24T16:58:00.000000000']].\n", " At stop Zürich, Dammweg (8591110), take route 571 headed to Zürich, Albisgütli at time 2020-05-24T17:08:00.000000000.\n", " Get out at stop 880 at time [['2020-05-24T17:13:00.000000000' '2020-05-24T17:13:00.000000000']].\n", "Walk 470 seconds from stop Zürich, Sihlquai/HB (8591368) to stop Zürich, 
Stampfenbachplatz (8591379).\n", "You have arrived at the target stop Zürich, Stampfenbachplatz (8591379) before the target time of 2020-05-24T17:30.\n", "\n", "\n", " ---------- OPTION 5\n", "Journey begins at stop Zürich, Tüffenwies (8591402) at time 2020-05-24T16:45:00.000000000, with an overall probability of success = 0.9962311557788945 \n", "\n", " At stop Zürich, Tüffenwies (8591402), take route 108 headed to Zürich Oerlikon, Bahnhof Nord at time 2020-05-24T16:45:00.000000000.\n", " Get out at stop 1221 at time [['2020-05-24T16:53:00.000000000' '2020-05-24T16:53:00.000000000']].\n", " At stop Zürich, Meierhofplatz (8576240), take route 437 headed to Zürich, Bahnhofquai/HB at time 2020-05-24T17:03:00.000000000.\n", " Get out at stop 1033 at time [['2020-05-24T17:16:00.000000000' '2020-05-24T17:16:00.000000000']].\n", "You have arrived at the target stop Zürich, Stampfenbachplatz (8591379) before the target time of 2020-05-24T17:30.\n", "\n", "\n", " ---------- OPTION 6\n", "Journey begins at stop Zürich, Tüffenwies (8591402) at time 2020-05-24T16:42:10.000000000, with an overall probability of success = 0.991869918699187 \n", "\n", "Walk 590 seconds from stop Zürich, Tüffenwies (8591402) to stop Zürich, Grünaustrasse (8591167).\n", " At stop Zürich, Grünaustrasse (8591167), take route 1315 headed to Zürich, Albisgütli at time 2020-05-24T16:54:00.000000000.\n", " Get out at stop 880 at time [['2020-05-24T17:10:00.000000000' '2020-05-24T17:10:00.000000000']].\n", "Walk 470 seconds from stop Zürich, Sihlquai/HB (8591368) to stop Zürich, Stampfenbachplatz (8591379).\n", "You have arrived at the target stop Zürich, Stampfenbachplatz (8591379) before the target time of 2020-05-24T17:30.\n", "\n", "\n", " ---------- OPTION 7\n", "Journey begins at stop Zürich, Tüffenwies (8591402) at time 2020-05-24T16:39:00.000000000, with an overall probability of success = 0.9986795647137569 \n", "\n", " At stop Zürich, Tüffenwies (8591402), take route 342 headed to Zürich 
Altstetten, Bahnhof N at time 2020-05-24T16:39:00.000000000.\n", " Get out at stop 521 at time [['2020-05-24T16:42:00.000000000' 'NaT']].\n", "Walk 95 seconds from stop Zürich Altstetten, Bahnhof N (8591057) to stop Zürich Altstetten (8503001).\n", " At stop Zürich Altstetten (8503001), take route 446 headed to Zürich HB at time 2020-05-24T17:00:00.000000000.\n", " Get out at stop 1173 at time [['2020-05-24T17:06:00.000000000' 'NaT']].\n", "Walk 323 seconds from stop Zürich HB (8503000) to stop Zürich, Stampfenbachplatz (8591379).\n", "You have arrived at the target stop Zürich, Stampfenbachplatz (8591379) before the target time of 2020-05-24T17:30.\n", "\n", "\n", " ---------- OPTION 8\n", "Journey begins at stop Zürich, Tüffenwies (8591402) at time 2020-05-24T16:35:10.000000000, with an overall probability of success = 1.0 \n", "\n", "Walk 590 seconds from stop Zürich, Tüffenwies (8591402) to stop Zürich, Grünaustrasse (8591167).\n", " At stop Zürich, Grünaustrasse (8591167), take route 1315 headed to Zürich, Albisgütli at time 2020-05-24T16:47:00.000000000.\n", " Get out at stop 1102 at time [['2020-05-24T16:58:00.000000000' '2020-05-24T16:58:00.000000000']].\n", " At stop Zürich, Dammweg (8591110), take route 571 headed to Zürich, Albisgütli at time 2020-05-24T17:08:00.000000000.\n", " Get out at stop 880 at time [['2020-05-24T17:13:00.000000000' '2020-05-24T17:13:00.000000000']].\n", "Walk 470 seconds from stop Zürich, Sihlquai/HB (8591368) to stop Zürich, Stampfenbachplatz (8591379).\n", "You have arrived at the target stop Zürich, Stampfenbachplatz (8591379) before the target time of 2020-05-24T17:30.\n", "\n", "\n", " ---------- OPTION 9\n", "Journey begins at stop Zürich, Tüffenwies (8591402) at time 2020-05-24T16:18:00.000000000, with an overall probability of success = 1.0 \n", "\n", " At stop Zürich, Tüffenwies (8591402), take route 726 headed to Zürich, Bahnhofplatz/HB at time 2020-05-24T16:18:00.000000000.\n", " Get out at stop 1196 at time 
[['2020-05-24T16:37:00.000000000' 'NaT']].\n", "Walk 479 seconds from stop Zürich, Bahnhofplatz/HB (8587348) to stop Zürich, Stampfenbachplatz (8591379).\n", "You have arrived at the target stop Zürich, Stampfenbachplatz (8591379) before the target time of 2020-05-24T17:30.\n" ] } ], "source": [ "print_solutions(bags_p_s)" ] }, { "cell_type": "code", "execution_count": 50, "metadata": { "scrolled": true }, "outputs": [], "source": [ "for i, label in enumerate(bags[-1][p_s]):\n", " print('\\n'*2,'-'*10, 'OPTION', i+1)\n", " label.print_journey()" ] }, { "cell_type": "code", "execution_count": 52, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "n_stops: 11\n", "route_stops: [1007 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027]\n" ] }, { "data": { "text/plain": [ "array([[ 'NaT', '2020-05-21T07:03:00.000000000'],\n", " ['2020-05-21T07:05:00.000000000', '2020-05-21T07:05:00.000000000'],\n", " ['2020-05-21T07:06:00.000000000', '2020-05-21T07:06:00.000000000'],\n", " ['2020-05-21T07:08:00.000000000', '2020-05-21T07:08:00.000000000'],\n", " ['2020-05-21T07:09:00.000000000', '2020-05-21T07:09:00.000000000'],\n", " ['2020-05-21T07:10:00.000000000', '2020-05-21T07:10:00.000000000'],\n", " ['2020-05-21T07:11:00.000000000', '2020-05-21T07:11:00.000000000'],\n", " ['2020-05-21T07:12:00.000000000', '2020-05-21T07:12:00.000000000'],\n", " ['2020-05-21T07:14:00.000000000', '2020-05-21T07:14:00.000000000'],\n", " ['2020-05-21T07:15:00.000000000', '2020-05-21T07:15:00.000000000'],\n", " ['2020-05-21T07:16:00.000000000', '2020-05-21T07:16:00.000000000'],\n", " ['2020-05-21T07:17:00.000000000', '2020-05-21T07:17:00.000000000'],\n", " ['2020-05-21T07:18:00.000000000', '2020-05-21T07:18:00.000000000'],\n", " ['2020-05-21T07:19:00.000000000', '2020-05-21T07:19:00.000000000'],\n", " ['2020-05-21T07:20:00.000000000', '2020-05-21T07:20:00.000000000'],\n", " ['2020-05-21T07:21:00.000000000', '2020-05-21T07:21:00.000000000'],\n", " 
['2020-05-21T07:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T07:33:00.000000000'],\n", " ['2020-05-21T07:35:00.000000000', '2020-05-21T07:35:00.000000000'],\n", " ['2020-05-21T07:36:00.000000000', '2020-05-21T07:36:00.000000000'],\n", " ['2020-05-21T07:38:00.000000000', '2020-05-21T07:38:00.000000000'],\n", " ['2020-05-21T07:39:00.000000000', '2020-05-21T07:39:00.000000000'],\n", " ['2020-05-21T07:40:00.000000000', '2020-05-21T07:40:00.000000000'],\n", " ['2020-05-21T07:41:00.000000000', '2020-05-21T07:41:00.000000000'],\n", " ['2020-05-21T07:42:00.000000000', '2020-05-21T07:42:00.000000000'],\n", " ['2020-05-21T07:44:00.000000000', '2020-05-21T07:44:00.000000000'],\n", " ['2020-05-21T07:45:00.000000000', '2020-05-21T07:45:00.000000000'],\n", " ['2020-05-21T07:46:00.000000000', '2020-05-21T07:46:00.000000000'],\n", " ['2020-05-21T07:47:00.000000000', '2020-05-21T07:47:00.000000000'],\n", " ['2020-05-21T07:48:00.000000000', '2020-05-21T07:48:00.000000000'],\n", " ['2020-05-21T07:49:00.000000000', '2020-05-21T07:49:00.000000000'],\n", " ['2020-05-21T07:50:00.000000000', '2020-05-21T07:50:00.000000000'],\n", " ['2020-05-21T07:51:00.000000000', '2020-05-21T07:51:00.000000000'],\n", " ['2020-05-21T07:52:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T08:03:00.000000000'],\n", " ['2020-05-21T08:05:00.000000000', '2020-05-21T08:05:00.000000000'],\n", " ['2020-05-21T08:06:00.000000000', '2020-05-21T08:06:00.000000000'],\n", " ['2020-05-21T08:08:00.000000000', '2020-05-21T08:08:00.000000000'],\n", " ['2020-05-21T08:09:00.000000000', '2020-05-21T08:09:00.000000000'],\n", " ['2020-05-21T08:10:00.000000000', '2020-05-21T08:10:00.000000000'],\n", " ['2020-05-21T08:11:00.000000000', '2020-05-21T08:11:00.000000000'],\n", " ['2020-05-21T08:12:00.000000000', '2020-05-21T08:12:00.000000000'],\n", " ['2020-05-21T08:14:00.000000000', '2020-05-21T08:14:00.000000000'],\n", " ['2020-05-21T08:15:00.000000000', '2020-05-21T08:15:00.000000000'],\n", " 
['2020-05-21T08:16:00.000000000', '2020-05-21T08:16:00.000000000'],\n", " ['2020-05-21T08:17:00.000000000', '2020-05-21T08:17:00.000000000'],\n", " ['2020-05-21T08:18:00.000000000', '2020-05-21T08:18:00.000000000'],\n", " ['2020-05-21T08:19:00.000000000', '2020-05-21T08:19:00.000000000'],\n", " ['2020-05-21T08:20:00.000000000', '2020-05-21T08:20:00.000000000'],\n", " ['2020-05-21T08:21:00.000000000', '2020-05-21T08:21:00.000000000'],\n", " ['2020-05-21T08:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T08:33:00.000000000'],\n", " ['2020-05-21T08:35:00.000000000', '2020-05-21T08:35:00.000000000'],\n", " ['2020-05-21T08:36:00.000000000', '2020-05-21T08:36:00.000000000'],\n", " ['2020-05-21T08:38:00.000000000', '2020-05-21T08:38:00.000000000'],\n", " ['2020-05-21T08:39:00.000000000', '2020-05-21T08:39:00.000000000'],\n", " ['2020-05-21T08:40:00.000000000', '2020-05-21T08:40:00.000000000'],\n", " ['2020-05-21T08:41:00.000000000', '2020-05-21T08:41:00.000000000'],\n", " ['2020-05-21T08:42:00.000000000', '2020-05-21T08:42:00.000000000'],\n", " ['2020-05-21T08:44:00.000000000', '2020-05-21T08:44:00.000000000'],\n", " ['2020-05-21T08:45:00.000000000', '2020-05-21T08:45:00.000000000'],\n", " ['2020-05-21T08:46:00.000000000', '2020-05-21T08:46:00.000000000'],\n", " ['2020-05-21T08:47:00.000000000', '2020-05-21T08:47:00.000000000'],\n", " ['2020-05-21T08:48:00.000000000', '2020-05-21T08:48:00.000000000'],\n", " ['2020-05-21T08:49:00.000000000', '2020-05-21T08:49:00.000000000'],\n", " ['2020-05-21T08:50:00.000000000', '2020-05-21T08:50:00.000000000'],\n", " ['2020-05-21T08:51:00.000000000', '2020-05-21T08:51:00.000000000'],\n", " ['2020-05-21T08:52:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T09:03:00.000000000'],\n", " ['2020-05-21T09:05:00.000000000', '2020-05-21T09:05:00.000000000'],\n", " ['2020-05-21T09:06:00.000000000', '2020-05-21T09:06:00.000000000'],\n", " ['2020-05-21T09:08:00.000000000', '2020-05-21T09:08:00.000000000'],\n", " 
['2020-05-21T09:09:00.000000000', '2020-05-21T09:09:00.000000000'],\n", " ['2020-05-21T09:10:00.000000000', '2020-05-21T09:10:00.000000000'],\n", " ['2020-05-21T09:11:00.000000000', '2020-05-21T09:11:00.000000000'],\n", " ['2020-05-21T09:12:00.000000000', '2020-05-21T09:12:00.000000000'],\n", " ['2020-05-21T09:14:00.000000000', '2020-05-21T09:14:00.000000000'],\n", " ['2020-05-21T09:15:00.000000000', '2020-05-21T09:15:00.000000000'],\n", " ['2020-05-21T09:16:00.000000000', '2020-05-21T09:16:00.000000000'],\n", " ['2020-05-21T09:17:00.000000000', '2020-05-21T09:17:00.000000000'],\n", " ['2020-05-21T09:18:00.000000000', '2020-05-21T09:18:00.000000000'],\n", " ['2020-05-21T09:19:00.000000000', '2020-05-21T09:19:00.000000000'],\n", " ['2020-05-21T09:20:00.000000000', '2020-05-21T09:20:00.000000000'],\n", " ['2020-05-21T09:21:00.000000000', '2020-05-21T09:21:00.000000000'],\n", " ['2020-05-21T09:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T10:03:00.000000000'],\n", " ['2020-05-21T10:05:00.000000000', '2020-05-21T10:05:00.000000000'],\n", " ['2020-05-21T10:06:00.000000000', '2020-05-21T10:06:00.000000000'],\n", " ['2020-05-21T10:08:00.000000000', '2020-05-21T10:08:00.000000000'],\n", " ['2020-05-21T10:09:00.000000000', '2020-05-21T10:09:00.000000000'],\n", " ['2020-05-21T10:10:00.000000000', '2020-05-21T10:10:00.000000000'],\n", " ['2020-05-21T10:11:00.000000000', '2020-05-21T10:11:00.000000000'],\n", " ['2020-05-21T10:12:00.000000000', '2020-05-21T10:12:00.000000000'],\n", " ['2020-05-21T10:14:00.000000000', '2020-05-21T10:14:00.000000000'],\n", " ['2020-05-21T10:15:00.000000000', '2020-05-21T10:15:00.000000000'],\n", " ['2020-05-21T10:16:00.000000000', '2020-05-21T10:16:00.000000000'],\n", " ['2020-05-21T10:17:00.000000000', '2020-05-21T10:17:00.000000000'],\n", " ['2020-05-21T10:18:00.000000000', '2020-05-21T10:18:00.000000000'],\n", " ['2020-05-21T10:19:00.000000000', '2020-05-21T10:19:00.000000000'],\n", " ['2020-05-21T10:20:00.000000000', 
'2020-05-21T10:20:00.000000000'],\n", " ['2020-05-21T10:21:00.000000000', '2020-05-21T10:21:00.000000000'],\n", " ['2020-05-21T10:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T11:03:00.000000000'],\n", " ['2020-05-21T11:05:00.000000000', '2020-05-21T11:05:00.000000000'],\n", " ['2020-05-21T11:06:00.000000000', '2020-05-21T11:06:00.000000000'],\n", " ['2020-05-21T11:08:00.000000000', '2020-05-21T11:08:00.000000000'],\n", " ['2020-05-21T11:09:00.000000000', '2020-05-21T11:09:00.000000000'],\n", " ['2020-05-21T11:10:00.000000000', '2020-05-21T11:10:00.000000000'],\n", " ['2020-05-21T11:11:00.000000000', '2020-05-21T11:11:00.000000000'],\n", " ['2020-05-21T11:12:00.000000000', '2020-05-21T11:12:00.000000000'],\n", " ['2020-05-21T11:14:00.000000000', '2020-05-21T11:14:00.000000000'],\n", " ['2020-05-21T11:15:00.000000000', '2020-05-21T11:15:00.000000000'],\n", " ['2020-05-21T11:16:00.000000000', '2020-05-21T11:16:00.000000000'],\n", " ['2020-05-21T11:17:00.000000000', '2020-05-21T11:17:00.000000000'],\n", " ['2020-05-21T11:18:00.000000000', '2020-05-21T11:18:00.000000000'],\n", " ['2020-05-21T11:19:00.000000000', '2020-05-21T11:19:00.000000000'],\n", " ['2020-05-21T11:20:00.000000000', '2020-05-21T11:20:00.000000000'],\n", " ['2020-05-21T11:21:00.000000000', '2020-05-21T11:21:00.000000000'],\n", " ['2020-05-21T11:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T12:03:00.000000000'],\n", " ['2020-05-21T12:05:00.000000000', '2020-05-21T12:05:00.000000000'],\n", " ['2020-05-21T12:06:00.000000000', '2020-05-21T12:06:00.000000000'],\n", " ['2020-05-21T12:08:00.000000000', '2020-05-21T12:08:00.000000000'],\n", " ['2020-05-21T12:09:00.000000000', '2020-05-21T12:09:00.000000000'],\n", " ['2020-05-21T12:10:00.000000000', '2020-05-21T12:10:00.000000000'],\n", " ['2020-05-21T12:11:00.000000000', '2020-05-21T12:11:00.000000000'],\n", " ['2020-05-21T12:12:00.000000000', '2020-05-21T12:12:00.000000000'],\n", " ['2020-05-21T12:14:00.000000000', 
'2020-05-21T12:14:00.000000000'],\n", " ['2020-05-21T12:15:00.000000000', '2020-05-21T12:15:00.000000000'],\n", " ['2020-05-21T12:16:00.000000000', '2020-05-21T12:16:00.000000000'],\n", " ['2020-05-21T12:17:00.000000000', '2020-05-21T12:17:00.000000000'],\n", " ['2020-05-21T12:18:00.000000000', '2020-05-21T12:18:00.000000000'],\n", " ['2020-05-21T12:19:00.000000000', '2020-05-21T12:19:00.000000000'],\n", " ['2020-05-21T12:20:00.000000000', '2020-05-21T12:20:00.000000000'],\n", " ['2020-05-21T12:21:00.000000000', '2020-05-21T12:21:00.000000000'],\n", " ['2020-05-21T12:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T13:03:00.000000000'],\n", " ['2020-05-21T13:05:00.000000000', '2020-05-21T13:05:00.000000000'],\n", " ['2020-05-21T13:06:00.000000000', '2020-05-21T13:06:00.000000000'],\n", " ['2020-05-21T13:08:00.000000000', '2020-05-21T13:08:00.000000000'],\n", " ['2020-05-21T13:09:00.000000000', '2020-05-21T13:09:00.000000000'],\n", " ['2020-05-21T13:10:00.000000000', '2020-05-21T13:10:00.000000000'],\n", " ['2020-05-21T13:11:00.000000000', '2020-05-21T13:11:00.000000000'],\n", " ['2020-05-21T13:12:00.000000000', '2020-05-21T13:12:00.000000000'],\n", " ['2020-05-21T13:14:00.000000000', '2020-05-21T13:14:00.000000000'],\n", " ['2020-05-21T13:15:00.000000000', '2020-05-21T13:15:00.000000000'],\n", " ['2020-05-21T13:16:00.000000000', '2020-05-21T13:16:00.000000000'],\n", " ['2020-05-21T13:17:00.000000000', '2020-05-21T13:17:00.000000000'],\n", " ['2020-05-21T13:18:00.000000000', '2020-05-21T13:18:00.000000000'],\n", " ['2020-05-21T13:19:00.000000000', '2020-05-21T13:19:00.000000000'],\n", " ['2020-05-21T13:20:00.000000000', '2020-05-21T13:20:00.000000000'],\n", " ['2020-05-21T13:21:00.000000000', '2020-05-21T13:21:00.000000000'],\n", " ['2020-05-21T13:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T14:03:00.000000000'],\n", " ['2020-05-21T14:05:00.000000000', '2020-05-21T14:05:00.000000000'],\n", " ['2020-05-21T14:06:00.000000000', 
'2020-05-21T14:06:00.000000000'],\n", " ['2020-05-21T14:08:00.000000000', '2020-05-21T14:08:00.000000000'],\n", " ['2020-05-21T14:09:00.000000000', '2020-05-21T14:09:00.000000000'],\n", " ['2020-05-21T14:10:00.000000000', '2020-05-21T14:10:00.000000000'],\n", " ['2020-05-21T14:11:00.000000000', '2020-05-21T14:11:00.000000000'],\n", " ['2020-05-21T14:12:00.000000000', '2020-05-21T14:12:00.000000000'],\n", " ['2020-05-21T14:14:00.000000000', '2020-05-21T14:14:00.000000000'],\n", " ['2020-05-21T14:15:00.000000000', '2020-05-21T14:15:00.000000000'],\n", " ['2020-05-21T14:16:00.000000000', '2020-05-21T14:16:00.000000000'],\n", " ['2020-05-21T14:17:00.000000000', '2020-05-21T14:17:00.000000000'],\n", " ['2020-05-21T14:18:00.000000000', '2020-05-21T14:18:00.000000000'],\n", " ['2020-05-21T14:19:00.000000000', '2020-05-21T14:19:00.000000000'],\n", " ['2020-05-21T14:20:00.000000000', '2020-05-21T14:20:00.000000000'],\n", " ['2020-05-21T14:21:00.000000000', '2020-05-21T14:21:00.000000000'],\n", " ['2020-05-21T14:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T15:03:00.000000000'],\n", " ['2020-05-21T15:05:00.000000000', '2020-05-21T15:05:00.000000000'],\n", " ['2020-05-21T15:06:00.000000000', '2020-05-21T15:06:00.000000000'],\n", " ['2020-05-21T15:08:00.000000000', '2020-05-21T15:08:00.000000000'],\n", " ['2020-05-21T15:09:00.000000000', '2020-05-21T15:09:00.000000000'],\n", " ['2020-05-21T15:10:00.000000000', '2020-05-21T15:10:00.000000000'],\n", " ['2020-05-21T15:11:00.000000000', '2020-05-21T15:11:00.000000000'],\n", " ['2020-05-21T15:12:00.000000000', '2020-05-21T15:12:00.000000000'],\n", " ['2020-05-21T15:14:00.000000000', '2020-05-21T15:14:00.000000000'],\n", " ['2020-05-21T15:15:00.000000000', '2020-05-21T15:15:00.000000000'],\n", " ['2020-05-21T15:16:00.000000000', '2020-05-21T15:16:00.000000000'],\n", " ['2020-05-21T15:17:00.000000000', '2020-05-21T15:17:00.000000000'],\n", " ['2020-05-21T15:18:00.000000000', '2020-05-21T15:18:00.000000000'],\n", " 
['2020-05-21T15:19:00.000000000', '2020-05-21T15:19:00.000000000'],\n", " ['2020-05-21T15:20:00.000000000', '2020-05-21T15:20:00.000000000'],\n", " ['2020-05-21T15:21:00.000000000', '2020-05-21T15:21:00.000000000'],\n", " ['2020-05-21T15:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T16:03:00.000000000'],\n", " ['2020-05-21T16:05:00.000000000', '2020-05-21T16:05:00.000000000'],\n", " ['2020-05-21T16:06:00.000000000', '2020-05-21T16:06:00.000000000'],\n", " ['2020-05-21T16:08:00.000000000', '2020-05-21T16:08:00.000000000'],\n", " ['2020-05-21T16:09:00.000000000', '2020-05-21T16:09:00.000000000'],\n", " ['2020-05-21T16:10:00.000000000', '2020-05-21T16:10:00.000000000'],\n", " ['2020-05-21T16:11:00.000000000', '2020-05-21T16:11:00.000000000'],\n", " ['2020-05-21T16:12:00.000000000', '2020-05-21T16:12:00.000000000'],\n", " ['2020-05-21T16:14:00.000000000', '2020-05-21T16:14:00.000000000'],\n", " ['2020-05-21T16:15:00.000000000', '2020-05-21T16:15:00.000000000'],\n", " ['2020-05-21T16:16:00.000000000', '2020-05-21T16:16:00.000000000'],\n", " ['2020-05-21T16:17:00.000000000', '2020-05-21T16:17:00.000000000'],\n", " ['2020-05-21T16:18:00.000000000', '2020-05-21T16:18:00.000000000'],\n", " ['2020-05-21T16:19:00.000000000', '2020-05-21T16:19:00.000000000'],\n", " ['2020-05-21T16:20:00.000000000', '2020-05-21T16:20:00.000000000'],\n", " ['2020-05-21T16:21:00.000000000', '2020-05-21T16:21:00.000000000'],\n", " ['2020-05-21T16:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T16:33:00.000000000'],\n", " ['2020-05-21T16:35:00.000000000', '2020-05-21T16:35:00.000000000'],\n", " ['2020-05-21T16:36:00.000000000', '2020-05-21T16:36:00.000000000'],\n", " ['2020-05-21T16:38:00.000000000', '2020-05-21T16:38:00.000000000'],\n", " ['2020-05-21T16:39:00.000000000', '2020-05-21T16:39:00.000000000'],\n", " ['2020-05-21T16:40:00.000000000', '2020-05-21T16:40:00.000000000'],\n", " ['2020-05-21T16:41:00.000000000', '2020-05-21T16:41:00.000000000'],\n", " 
['2020-05-21T16:42:00.000000000', '2020-05-21T16:42:00.000000000'],\n", " ['2020-05-21T16:44:00.000000000', '2020-05-21T16:44:00.000000000'],\n", " ['2020-05-21T16:45:00.000000000', '2020-05-21T16:45:00.000000000'],\n", " ['2020-05-21T16:46:00.000000000', '2020-05-21T16:46:00.000000000'],\n", " ['2020-05-21T16:47:00.000000000', '2020-05-21T16:47:00.000000000'],\n", " ['2020-05-21T16:48:00.000000000', '2020-05-21T16:48:00.000000000'],\n", " ['2020-05-21T16:49:00.000000000', '2020-05-21T16:49:00.000000000'],\n", " ['2020-05-21T16:50:00.000000000', '2020-05-21T16:50:00.000000000'],\n", " ['2020-05-21T16:51:00.000000000', '2020-05-21T16:51:00.000000000'],\n", " ['2020-05-21T16:52:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T17:03:00.000000000'],\n", " ['2020-05-21T17:05:00.000000000', '2020-05-21T17:05:00.000000000'],\n", " ['2020-05-21T17:06:00.000000000', '2020-05-21T17:06:00.000000000'],\n", " ['2020-05-21T17:08:00.000000000', '2020-05-21T17:08:00.000000000'],\n", " ['2020-05-21T17:09:00.000000000', '2020-05-21T17:09:00.000000000'],\n", " ['2020-05-21T17:10:00.000000000', '2020-05-21T17:10:00.000000000'],\n", " ['2020-05-21T17:11:00.000000000', '2020-05-21T17:11:00.000000000'],\n", " ['2020-05-21T17:12:00.000000000', '2020-05-21T17:12:00.000000000'],\n", " ['2020-05-21T17:14:00.000000000', '2020-05-21T17:14:00.000000000'],\n", " ['2020-05-21T17:15:00.000000000', '2020-05-21T17:15:00.000000000'],\n", " ['2020-05-21T17:16:00.000000000', '2020-05-21T17:16:00.000000000'],\n", " ['2020-05-21T17:17:00.000000000', '2020-05-21T17:17:00.000000000'],\n", " ['2020-05-21T17:18:00.000000000', '2020-05-21T17:18:00.000000000'],\n", " ['2020-05-21T17:19:00.000000000', '2020-05-21T17:19:00.000000000'],\n", " ['2020-05-21T17:20:00.000000000', '2020-05-21T17:20:00.000000000'],\n", " ['2020-05-21T17:21:00.000000000', '2020-05-21T17:21:00.000000000'],\n", " ['2020-05-21T17:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T17:33:00.000000000'],\n", " 
['2020-05-21T17:35:00.000000000', '2020-05-21T17:35:00.000000000'],\n", " ['2020-05-21T17:36:00.000000000', '2020-05-21T17:36:00.000000000'],\n", " ['2020-05-21T17:38:00.000000000', '2020-05-21T17:38:00.000000000'],\n", " ['2020-05-21T17:39:00.000000000', '2020-05-21T17:39:00.000000000'],\n", " ['2020-05-21T17:40:00.000000000', '2020-05-21T17:40:00.000000000'],\n", " ['2020-05-21T17:41:00.000000000', '2020-05-21T17:41:00.000000000'],\n", " ['2020-05-21T17:42:00.000000000', '2020-05-21T17:42:00.000000000'],\n", " ['2020-05-21T17:44:00.000000000', '2020-05-21T17:44:00.000000000'],\n", " ['2020-05-21T17:45:00.000000000', '2020-05-21T17:45:00.000000000'],\n", " ['2020-05-21T17:46:00.000000000', '2020-05-21T17:46:00.000000000'],\n", " ['2020-05-21T17:47:00.000000000', '2020-05-21T17:47:00.000000000'],\n", " ['2020-05-21T17:48:00.000000000', '2020-05-21T17:48:00.000000000'],\n", " ['2020-05-21T17:49:00.000000000', '2020-05-21T17:49:00.000000000'],\n", " ['2020-05-21T17:50:00.000000000', '2020-05-21T17:50:00.000000000'],\n", " ['2020-05-21T17:51:00.000000000', '2020-05-21T17:51:00.000000000'],\n", " ['2020-05-21T17:52:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T18:03:00.000000000'],\n", " ['2020-05-21T18:05:00.000000000', '2020-05-21T18:05:00.000000000'],\n", " ['2020-05-21T18:06:00.000000000', '2020-05-21T18:06:00.000000000'],\n", " ['2020-05-21T18:08:00.000000000', '2020-05-21T18:08:00.000000000'],\n", " ['2020-05-21T18:09:00.000000000', '2020-05-21T18:09:00.000000000'],\n", " ['2020-05-21T18:10:00.000000000', '2020-05-21T18:10:00.000000000'],\n", " ['2020-05-21T18:11:00.000000000', '2020-05-21T18:11:00.000000000'],\n", " ['2020-05-21T18:12:00.000000000', '2020-05-21T18:12:00.000000000'],\n", " ['2020-05-21T18:14:00.000000000', '2020-05-21T18:14:00.000000000'],\n", " ['2020-05-21T18:15:00.000000000', '2020-05-21T18:15:00.000000000'],\n", " ['2020-05-21T18:16:00.000000000', '2020-05-21T18:16:00.000000000'],\n", " ['2020-05-21T18:17:00.000000000', 
'2020-05-21T18:17:00.000000000'],\n", " ['2020-05-21T18:18:00.000000000', '2020-05-21T18:18:00.000000000'],\n", " ['2020-05-21T18:19:00.000000000', '2020-05-21T18:19:00.000000000'],\n", " ['2020-05-21T18:20:00.000000000', '2020-05-21T18:20:00.000000000'],\n", " ['2020-05-21T18:21:00.000000000', '2020-05-21T18:21:00.000000000'],\n", " ['2020-05-21T18:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T18:33:00.000000000'],\n", " ['2020-05-21T18:35:00.000000000', '2020-05-21T18:35:00.000000000'],\n", " ['2020-05-21T18:36:00.000000000', '2020-05-21T18:36:00.000000000'],\n", " ['2020-05-21T18:38:00.000000000', '2020-05-21T18:38:00.000000000'],\n", " ['2020-05-21T18:39:00.000000000', '2020-05-21T18:39:00.000000000'],\n", " ['2020-05-21T18:40:00.000000000', '2020-05-21T18:40:00.000000000'],\n", " ['2020-05-21T18:41:00.000000000', '2020-05-21T18:41:00.000000000'],\n", " ['2020-05-21T18:42:00.000000000', '2020-05-21T18:42:00.000000000'],\n", " ['2020-05-21T18:44:00.000000000', '2020-05-21T18:44:00.000000000'],\n", " ['2020-05-21T18:45:00.000000000', '2020-05-21T18:45:00.000000000'],\n", " ['2020-05-21T18:46:00.000000000', '2020-05-21T18:46:00.000000000'],\n", " ['2020-05-21T18:47:00.000000000', '2020-05-21T18:47:00.000000000'],\n", " ['2020-05-21T18:48:00.000000000', '2020-05-21T18:48:00.000000000'],\n", " ['2020-05-21T18:49:00.000000000', '2020-05-21T18:49:00.000000000'],\n", " ['2020-05-21T18:50:00.000000000', '2020-05-21T18:50:00.000000000'],\n", " ['2020-05-21T18:51:00.000000000', '2020-05-21T18:51:00.000000000'],\n", " ['2020-05-21T18:52:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T19:03:00.000000000'],\n", " ['2020-05-21T19:05:00.000000000', '2020-05-21T19:05:00.000000000'],\n", " ['2020-05-21T19:06:00.000000000', '2020-05-21T19:06:00.000000000'],\n", " ['2020-05-21T19:08:00.000000000', '2020-05-21T19:08:00.000000000'],\n", " ['2020-05-21T19:09:00.000000000', '2020-05-21T19:09:00.000000000'],\n", " ['2020-05-21T19:10:00.000000000', 
'2020-05-21T19:10:00.000000000'],\n", " ['2020-05-21T19:11:00.000000000', '2020-05-21T19:11:00.000000000'],\n", " ['2020-05-21T19:12:00.000000000', '2020-05-21T19:12:00.000000000'],\n", " ['2020-05-21T19:14:00.000000000', '2020-05-21T19:14:00.000000000'],\n", " ['2020-05-21T19:15:00.000000000', '2020-05-21T19:15:00.000000000'],\n", " ['2020-05-21T19:16:00.000000000', '2020-05-21T19:16:00.000000000'],\n", " ['2020-05-21T19:17:00.000000000', '2020-05-21T19:17:00.000000000'],\n", " ['2020-05-21T19:18:00.000000000', '2020-05-21T19:18:00.000000000'],\n", " ['2020-05-21T19:19:00.000000000', '2020-05-21T19:19:00.000000000'],\n", " ['2020-05-21T19:20:00.000000000', '2020-05-21T19:20:00.000000000'],\n", " ['2020-05-21T19:21:00.000000000', '2020-05-21T19:21:00.000000000'],\n", " ['2020-05-21T19:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T19:33:00.000000000'],\n", " ['2020-05-21T19:35:00.000000000', '2020-05-21T19:35:00.000000000'],\n", " ['2020-05-21T19:36:00.000000000', '2020-05-21T19:36:00.000000000'],\n", " ['2020-05-21T19:38:00.000000000', '2020-05-21T19:38:00.000000000'],\n", " ['2020-05-21T19:39:00.000000000', '2020-05-21T19:39:00.000000000'],\n", " ['2020-05-21T19:40:00.000000000', '2020-05-21T19:40:00.000000000'],\n", " ['2020-05-21T19:41:00.000000000', '2020-05-21T19:41:00.000000000'],\n", " ['2020-05-21T19:42:00.000000000', '2020-05-21T19:42:00.000000000'],\n", " ['2020-05-21T19:44:00.000000000', '2020-05-21T19:44:00.000000000'],\n", " ['2020-05-21T19:45:00.000000000', '2020-05-21T19:45:00.000000000'],\n", " ['2020-05-21T19:46:00.000000000', '2020-05-21T19:46:00.000000000'],\n", " ['2020-05-21T19:47:00.000000000', '2020-05-21T19:47:00.000000000'],\n", " ['2020-05-21T19:48:00.000000000', '2020-05-21T19:48:00.000000000'],\n", " ['2020-05-21T19:49:00.000000000', '2020-05-21T19:49:00.000000000'],\n", " ['2020-05-21T19:50:00.000000000', '2020-05-21T19:50:00.000000000'],\n", " ['2020-05-21T19:51:00.000000000', '2020-05-21T19:51:00.000000000'],\n", " 
['2020-05-21T19:52:00.000000000', 'NaT']],\n", " dtype='datetime64[ns]')" ] }, "execution_count": 52, "metadata": {}, "output_type": "execute_result" } ], "source": [ "r = 382\n", "print('n_stops:', routes[r][1])\n", "print('route_stops:', get_stops(r))\n", "stopTimes[ routes[r][3] : routes[r+1][3] ]" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "5" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(bags_p_s)" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [], "source": [ "for i, label in enumerate(bags_p_s[2]):\n", " print('\\n'*2,'-'*10, 'OPTION', i, 'with at most 2 trips')\n", " label.print_journey()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Code for prototyping and debugging:" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "iinfo(min=0, max=4294967295, dtype=uint32)" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "np.iinfo(np.uint32)" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "ename": "RuntimeError", "evalue": "Set changed size during iteration", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mRuntimeError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0ms1\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m{\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m2\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m3\u001b[0m\u001b[0;34m}\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0;32mfor\u001b[0m \u001b[0mi\u001b[0m \u001b[0;32min\u001b[0m 
\u001b[0ms1\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 3\u001b[0m \u001b[0ms1\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0madd\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mi\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0;36m10\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0ms1\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;31mRuntimeError\u001b[0m: Set changed size during iteration" ] } ], "source": [ "s1 = {1,2,3}\n", "for i in s1:\n", " s1.add(i*10)\n", "s1" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(array([99., 80., 47., 22., 15., 11., 3., 0., 1., 1.]),\n", " array([1.0, 6.9, 12.8, 18.700000000000003, 24.6, 30.5, 36.400000000000006,\n", " 42.300000000000004, 48.2, 54.1, 60.0], dtype=object),\n", " )" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" }, { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAXcAAAD4CAYAAAAXUaZHAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAN3klEQVR4nO3df6jd9X3H8edrps5qtyaaS8gS3c0wVGTMH1ysYimd2YbVUv1DxFJGKIH8Yze7Ftq4wWT/KYxaB0MIapuBWJ3tlqCjrUstY38s7Y3aGpM6MxtrJJor03Xrxtqs7/1xvsJddqP3nO89npwPzwdczvl+vt9zvu83+fq6Xz/nfL83VYUkqS2/NOkCJEkrz3CXpAYZ7pLUIMNdkhpkuEtSg1ZNugCAtWvX1uzs7KTLkKSpsn///teramapdadFuM/OzjI/Pz/pMiRpqiR56VTrnJaRpAYZ7pLUIMNdkhr0juGe5IEkx5McWDR2bpInkrzQPa7pxpPkL5IcTvKDJJePs3hJ0tKWc+b+FeDak8Z2AHurajOwt1sG+CiwufvZDty7MmVKkobxjuFeVf8A/OtJwzcAu7rnu4AbF43/VQ38E7A6yfqVKlaStDyjzrmvq6pj3fNXgXXd8w3Ay4u2O9qNSZLeRb0/UK3BPYOHvm9wku1J5pPMLyws9C1DkrTIqOH+2lvTLd3j8W78FeD8Rdtt7Mb+n6raWVVzVTU3M7PkBVaSpBGNeoXqHmArcGf3uHvR+KeTfBX4IPBvi6ZvxmJ2x+PjfPu3deTO6ye2b0l6O+8Y7kkeAj4CrE1yFLiDQag/kmQb8BJwc7f53wHXAYeB/wQ+NYaaJUnv4B3Dvao+cYpVW5bYtoBb+xYlSerHK1QlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBhnuktQgw12SGmS4S1KDDHdJapDhLkkNMtwlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBvcI9yR8leS7JgSQPJTkryaYk+5IcTvJwkjNXqlhJ0vKMHO5JNgB/CMxV1W8CZwC3AHcBd1fVhcAbwLaVKFSStHx9p2VWAe9Nsgo4GzgGXAM82q3fBdzYcx+SpCGtGvWFVfVKkj8Hfgz8F/AtYD/wZlWd6DY7CmxY6vVJtgPbAS644IJRy5io2R2PT2S/R+68fiL7lTQ9+kzLrAFuADYBvwacA1y73NdX1c6qmququZmZmVHLkCQtoc+0zO8AP6qqhar6OfB14GpgdTdNA7AReKVnjZKkIfUJ9x8DVyY5O0mALcBB4Engpm6brcDufiVKkoY1crhX1T4GH5w+BTzbvddO4AvAZ5McBs4D7l+BOiVJQxj5A1WAqroDuOOk4ReBK/q8rySpH69QlaQGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBhnuktQgw12SGmS4S1KDDHdJapDhLkkNMtwlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBvUK9ySrkzya5IdJDiW5Ksm5SZ5I8kL3uGalipUkLU/fM/d7gG9U1UXAJcAhYAewt6o2A3u7ZUnSu2jkcE/yfuDDwP0AVfWzqnoTuAHY1W22C7ixb5GSpOH0OXPfBCwAX07ydJL7kpwDrKuqY902rwLrlnpxku1J5pPMLyws9ChDknSyPuG+CrgcuLeqLgN+yklTMFVVQC314qraWVVzVTU3MzPTowxJ0sn6hPtR4GhV7euWH2UQ9q8lWQ/QPR7vV6IkaVgjh3tVvQq8nOQD3dAW4CCwB9jajW0FdveqUJI0tFU
9X/8HwINJzgReBD7F4BfGI0m2AS8BN/fchyRpSL3CvaqeAeaWWLWlz/tKkvrxClVJapDhLkkNMtwlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBq2adAEa3uyOxye27yN3Xj+xfUtaPs/cJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBvUO9yRnJHk6yWPd8qYk+5IcTvJwkjP7lylJGsZKnLnfBhxatHwXcHdVXQi8AWxbgX1IkobQK9yTbASuB+7rlgNcAzzabbILuLHPPiRJw+t75v4l4PPAL7rl84A3q+pEt3wU2LDUC5NsTzKfZH5hYaFnGZKkxUYO9yQfA45X1f5RXl9VO6tqrqrmZmZmRi1DkrSEPn+s42rg40muA84CfhW4B1idZFV39r4ReKV/mZKkYYx85l5Vt1fVxqqaBW4Bvl1VnwSeBG7qNtsK7O5dpSRpKOP4nvsXgM8mOcxgDv7+MexDkvQ2VuRvqFbVd4DvdM9fBK5YifeVJI3GK1QlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBhnuktQgw12SGmS4S1KDDHdJapDhLkkNMtwlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDVo16QI0XWZ3PD6R/R658/qJ7FeaViOfuSc5P8mTSQ4meS7Jbd34uUmeSPJC97hm5cqVJC1Hn2mZE8Dnqupi4Erg1iQXAzuAvVW1GdjbLUuS3kUjh3tVHauqp7rn/w4cAjYANwC7us12ATf2LVKSNJwV+UA1ySxwGbAPWFdVx7pVrwLrTvGa7Unmk8wvLCysRBmSpE7vcE/yPuBrwGeq6ieL11VVAbXU66pqZ1XNVdXczMxM3zIkSYv0Cvck72EQ7A9W1de74deSrO/WrweO9ytRkjSsPt+WCXA/cKiqvrho1R5ga/d8K7B79PIkSaPo8z33q4HfB55N8kw39sfAncAjSbYBLwE39ytRkjSskcO9qv4RyClWbxn1fSVJ/Xn7AUlqkOEuSQ0y3CWpQYa7JDXIu0JqKng3Smk4nrlLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBhnuktQgw12SGuRFTNLbmNTFU+AFVOrHM3dJapDhLkkNMtwlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoP8Yx3SaWpSfyjEPxLSBs/cJalBhrskNWgs4Z7k2iTPJzmcZMc49iFJOrUVn3NPcgbwl8DvAkeB7yXZU1UHV3pfktoyyT9IPinj+oxjHGfuVwCHq+rFqvoZ8FXghjHsR5J0CuP4tswG4OVFy0eBD568UZLtwPZu8T+SPL+M914LvN67wtNHS/201Au01c9QveSuMVayMlr6tyF39ern10+1YmJfhayqncDOYV6TZL6q5sZU0ruupX5a6gXa6qelXsB+lmsc0zKvAOcvWt7YjUmS3iXjCPfvAZuTbEpyJnALsGcM+5EkncKKT8tU1Ykknwa+CZwBPFBVz63Q2w81jTMFWuqnpV6grX5a6gXsZ1lSVeN4X0nSBHmFqiQ1yHCXpAZNTbhP+y0NkjyQ5HiSA4vGzk3yRJIXusc1k6xxuZKcn+TJJAeTPJfktm586vpJclaS7yb5ftfLn3Xjm5Ls6463h7svB0yNJGckeTrJY93y1PaT5EiSZ5M8k2S+G5u6Yw0gyeokjyb5YZJDSa4aVy9TEe6LbmnwUeBi4BNJLp5sVUP7CnDtSWM7gL1VtRnY2y1PgxPA56rqYuBK4Nbu32M
a+/lv4JqqugS4FLg2yZXAXcDdVXUh8AawbYI1juI24NCi5Wnv57er6tJF3wefxmMN4B7gG1V1EXAJg3+j8fRSVaf9D3AV8M1Fy7cDt0+6rhH6mAUOLFp+HljfPV8PPD/pGkfsazeDewlNdT/A2cBTDK6ofh1Y1Y3/n+PvdP9hcG3JXuAa4DEgU97PEWDtSWNTd6wB7wd+RPdFlnH3MhVn7ix9S4MNE6plJa2rqmPd81eBdZMsZhRJZoHLgH1MaT/dFMYzwHHgCeBfgDer6kS3ybQdb18CPg/8ols+j+nup4BvJdnf3bYEpvNY2wQsAF/upszuS3IOY+plWsK9eTX4tT1V30tN8j7ga8Bnquoni9dNUz9V9T9VdSmDM94rgIsmXNLIknwMOF5V+yddywr6UFVdzmBa9tYkH168coqOtVXA5cC9VXUZ8FNOmoJZyV6mJdxbvaXBa0nWA3SPxydcz7IleQ+DYH+wqr7eDU9tPwBV9SbwJINpi9VJ3rrIb5qOt6uBjyc5wuCOrNcwmOed1n6oqle6x+PA3zD4BTyNx9pR4GhV7euWH2UQ9mPpZVrCvdVbGuwBtnbPtzKYuz7tJQlwP3Coqr64aNXU9ZNkJsnq7vl7GXx2cIhByN/UbTYVvQBU1e1VtbGqZhn8d/LtqvokU9pPknOS/Mpbz4HfAw4whcdaVb0KvJzkA93QFuAg4+pl0h8yDPFhxHXAPzOYD/2TSdczQv0PAceAnzP4Db6NwVzoXuAF4O+Bcydd5zJ7+RCD/3X8AfBM93PdNPYD/BbwdNfLAeBPu/HfAL4LHAb+GvjlSdc6Qm8fAR6b5n66ur/f/Tz31n/703isdXVfCsx3x9vfAmvG1Yu3H5CkBk3LtIwkaQiGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWrQ/wJ3eeD1vPXiQgAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "import matplotlib.pyplot as plt\n", "# Plot distribution of n_stops\n", "plt.hist(routes[:,1])" ] }, { "cell_type": "code", "execution_count": 95, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[0]" ] }, "execution_count": 95, "metadata": {}, "output_type": "execute_result" } ], "source": [ "list(range(0,-1,-1))" ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[[[], [<__main__.RouteLabel at 0x7fd99af341d0>]],\n", " [[], [<__main__.RouteLabel at 0x7fd99af341d0>]]]" ] }, "execution_count": 42, "metadata": {}, "output_type": "execute_result" } ], "source": [ "l = [[[RouteLabel(1,1,0,1,TargetLabel(p_t, tau_0),1)] for _ in range(2)]]\n", "l.append(l[-1].copy())\n", "l[1][0].remove(l[1][0][0])\n", "l" ] }, { "cell_type": "code", "execution_count": 49, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "None\n" ] }, { "data": { "text/plain": [ "[0, 1, 3, 4, 5]" ] }, "execution_count": 49, "metadata": {}, "output_type": "execute_result" } ], "source": [ "l = list(range(6))\n", "ret = l.remove(2)\n", "print(ret)\n", "l" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Departure at 2020-05-11T08:00 from stop 0.\n", " Departure at 2020-05-11T08:10 from stop 0.\n" ] } ], "source": [ "B = [RouteLabel(1,1,0,0,TargetLabel(p_t, tau_0),0.8), RouteLabel(1,1,0,1,TargetLabel(p_t, tau_0),1)]\n", "B[0].update_stop(0)\n", "B[1].update_stop(0)\n", "for l in B:\n", " l.pprint()" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Departure at 2020-05-11T08:20 from stop 0.\n", "True\n", " Departure at 2020-05-11T08:10 from stop 0.\n", " Departure at 2020-05-11T08:20 from stop 555.\n", "----------\n", " 
Departure at 2020-05-11T08:20 from stop 555.\n" ] } ], "source": [ "label = RouteLabel(4,0, 2, 0, TargetLabel(p_t, tau_0), 0.9)\n", "label.update_stop(0)\n", "label.pprint()\n", "print(update_bag(B, label, 0))\n", "label.stop = 555\n", "for l in B:\n", " l.pprint()\n", "print('-'*10)\n", "label.pprint()" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[[1, 2, 3], [1, 2, 666]]" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "bags = [[1,2,3]]\n", "bags.append(bags[-1].copy())\n", "bags[1][2] = 666\n", "bags" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p_s = 0 # start stop = A\n", "p_t = 4 # target stop = E\n", "tau_0 = np.datetime64('2020-05-11T08:05') # departure time 08:05\n", "k_max = 10 # maximum number of transports; used to pre-allocate the numpy array tau" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# initialization\n", "n_stops = len(stops)\n", "\n", "# earliest arrival time at each stop for each round.\n", "tau = np.full(shape=(k_max, n_stops), fill_value = np.datetime64('2100-01-01T00:00')) # 2100 instead of infinity # max number of transports x number of stops\n", "\n", "# earliest arrival time at each stop, indep. 
of round\n", "tau_star = np.full(shape=n_stops, fill_value = np.datetime64('2100-01-01T00:00'))\n", "\n", "marked = [p_s]\n", "q = []\n", "tau[0, p_s] = tau_0" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i)[0][0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p_i" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "t_r_dep = stopTimes[routes[r][3]+\\\n", " # offset corresponding to stop p_i in route r\n", " np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i)[0][0] + \\\n", " routes[r][1]*t_r][1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 2) <\\\n", "np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 3):\n", " print(\"hello\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "routeStops[routes[1][2] + np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 2)[0][0]:routes[1][2]+routes[1][1]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "routeStops[routes[1][2] + np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 2)[0][0]:6]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "routeStops[routes[1][2]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "routeStops[np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 2)[0][0]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if True and \\\n", " 
True:\n", " print(\"hello\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tau[0][0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "stopTimes[3][1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "a = np.arange(1, 10)\n", "a" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "a[1:10:2]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "stopTimes[routes[0][3]+\\\n", " # offset corresponding to stop p_i in route r\n", " np.where(routeStops[routes[0][2]:routes[0][2]+routes[0][1]] == 0)[0][0]:\\\n", " # end of the trips of r\n", " routes[0][3]+routes[0][0]*routes[0][1]:\\\n", " # we can jump from the number of stops in r to find the next departure of route r at p_i\n", " routes[0][1]\n", " ]\n", "# we may more simply loop through all trips, and stop as soon as the departure time is after the arrival time\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "stopTimes[routes[0][3]+\\\n", " # offset corresponding to stop p_i in route r\n", " np.where(routeStops[routes[0][2]:routes[0][2]+routes[0][1]] == 0)[0][0]][1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "stopTimes[routes[1][3]+\\\n", " # offset corresponding to stop p_i in route r\n", " np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 3)[0][0] + \\\n", " routes[1][1]*1][1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# t_r is a trip that belongs to route r. 
t_r can take value 0 to routes[r][0]-1\n", "t = None\n", "r = 1\n", "tau_k_1 = tau[0][0]\n", "p_i = 3\n", "\n", "t_r = 0\n", "while True:\n", " \n", " t_r_dep = stopTimes[routes[r][3]+\\\n", " # offset corresponding to stop p_i in route r\n", " np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i)[0][0] + \\\n", " routes[r][1]*t_r][1]\n", " \n", " if t_r_dep > tau_k_1:\n", " # retrieving the index of the departure time of the trip in stopTimes\n", " #t = routes[r][3] + t_r * routes[r][1]\n", " t = t_r\n", " break\n", " t_r += 1\n", " # we could not hop on any trip at this stop\n", " if t_r == routes[r][0]:\n", " break\n", " \n", "print(\"done\")\n", "print(t)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "r = 1\n", "t = 1\n", "p_i = 2\n", "# 1st trip of route + offset for the right trip + offset for the right stop\n", "stopTimes[routes[r][3] + t * routes[r][1] + np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i)[0][0]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "d = []\n", "not d" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "r = 1\n", "t = 0\n", "p_i = 4\n", "arr_t_p_i = stopTimes[routes[r][3] + \\\n", " t * routes[r][1] + \\\n", " np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i)[0][0]][0]\n", "arr_t_p_i" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.datetime64('NaT') > np.datetime64('2100-01-01')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.datetime64('NaT') < np.datetime64('2100-01-01')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "jupytext": { "formats": "ipynb,md,py:percent" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": 
"python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 4 }