diff --git a/notebooks/MC_RAPTOR.ipynb b/notebooks/MC_RAPTOR.ipynb index 22fa9b5..7ada3ac 100644 --- a/notebooks/MC_RAPTOR.ipynb +++ b/notebooks/MC_RAPTOR.ipynb @@ -1,2417 +1,2441 @@ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Reversed MC RAPTOR\n", "\n", "## Left out at this stage:\n", "\n", "- Realistic time to get out of one transport and walk to the platform of the next. Instead, we just set it to 2 minutes, no matter what.\n", "\n", "## Encoding the data structures\n", "### General considerations\n", "We adhere to the data structures proposed by Delling et al. These structures aim to minimize read times in memory by making use of consecutive in-memory addresses. Thus, structures with varying dimensions (e.g. dataframes, Python lists) are excluded. We illustrate the difficulty with an example.\n", "\n", "Each route has a potentially unique number of stops. Therefore, we cannot store stops in a 2D array of routes by stops, as the number of stops is not the same for each route. We address this problem by storing stops consecutively by route, and keeping track of the index of the first stop of each route.\n", "\n", "This general strategy is applied to all the required data structures, where possible.\n", "\n", "### routes\n", "The `routes` array will contain arrays `[n_trips, n_stops, pt_1st_stop, pt_1st_trip]` where all four values are `int`. To keep things simple while mimicking pointers in Python, `pt_1st_stop` and `pt_1st_trip` contain integer indices."
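As a sketch of this layout (with toy numbers invented for the example, not the actual dataset), the pointer arithmetic works like this:

```python
import numpy as np

# Toy data: two routes. Each row of `routes` is
# [n_trips, n_stops, pt_1st_stop, pt_1st_trip].
routes = np.array([[2, 3, 0, 0],   # route 0: 2 trips, 3 stops
                   [1, 2, 3, 6]],  # route 1: 1 trip, 2 stops
                  dtype=np.uint32)

# Stops of all routes, stored consecutively:
routeStops = np.array([10, 11, 12,  # stops of route 0
                       11, 13],     # stops of route 1
                      dtype=np.uint16)

def stops_of_route(r):
    """Slice routeStops using pt_1st_stop and n_stops of route r."""
    n_trips, n_stops, pt_1st_stop, pt_1st_trip = routes[r]
    return routeStops[pt_1st_stop : pt_1st_stop + n_stops]

print(stops_of_route(0))  # [10 11 12]
print(stops_of_route(1))  # [11 13]
```

The same slicing pattern (a pointer column plus a length column) recurs for all the flattened arrays below.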
] }, { "cell_type": "code", "execution_count": 1, "metadata": { "lines_to_next_cell": 0 }, "outputs": [], "source": [ "import numpy as np\n", "import pickle\n", "\n", "def pkload(path):\n", " with open(path, 'rb') as f:\n", " obj = pickle.load(f)\n", " return obj" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "lines_to_next_cell": 0 }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(1461, 4)\n" ] }, { "data": { "text/plain": [ "array([[ 1, 26, 0, 0],\n", " [ 1, 8, 26, 26],\n", " [ 1, 17, 34, 34],\n", " ...,\n", " [ 1, 3, 15362, 260396],\n", " [ 2, 16, 15365, 260399],\n", " [ 1, 28, 15381, 260431]], dtype=uint32)" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "routes = pkload(\"../data/routes_array_cyril.pkl\").astype(np.uint32)\n", "print(routes.shape)\n", "routes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### routeStops\n", "`routeStops` is an array that contains the ordered lists of stops for each route, stored consecutively. `pt_1st_stop` in `routes` is required to get to the first stop of the route: `routeStops[pt_1st_stop : pt_1st_stop + n_stops]` is the sequence of stops for route $r_i$." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(15409,)\n", "1406\n" ] }, { "data": { "text/plain": [ "array([1221, 816, 776, ..., 1349, 1037, 552], dtype=uint16)" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "routeStops = pkload(\"../data/route_stops_array_cyril.pkl\").astype(np.uint16)\n", "print(routeStops.shape)\n", "print(routeStops.max())\n", "routeStops" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### stopTimes\n", "\n", "The i-th entry in the `stopTimes` array is itself an array which contains the arrival and departure time at a particular stop for a particular trip. `stopTimes` is sorted by routes, and then by trips. 
We retrieve the index of the first (earliest) trip of the route with the pointer `pt_1st_trip` stored in `routes`. We may use the built-in `numpy` [date and time data structures](https://blog.finxter.com/how-to-work-with-dates-and-times-in-python/). In short, declaring dates and times is done like this: `np.datetime64('YYYY-MM-DDThh:mm')`. Entries with a `NaT` arrival or departure time correspond to the beginning and the end of a trip, respectively.\n", "\n", "Note that trips are indexed implicitly in `stopTimes`, but we decided to deviate slightly from the paper and index them according to their parent route instead of giving them an absolute index. This makes things a bit easier when coding the algorithm." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(260459, 2)\n" ] }, { "data": { "text/plain": [ "array([[ 'NaT', '2020-05-24T07:00:00.000000000'],\n", " ['2020-05-24T07:01:00.000000000', '2020-05-24T07:01:00.000000000'],\n", " ['2020-05-24T07:02:00.000000000', '2020-05-24T07:02:00.000000000'],\n", " ...,\n", " ['2020-05-24T07:35:00.000000000', '2020-05-24T07:35:00.000000000'],\n", " ['2020-05-24T07:36:00.000000000', '2020-05-24T07:36:00.000000000'],\n", " ['2020-05-24T07:37:00.000000000', 'NaT']],\n", " dtype='datetime64[ns]')" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "stopTimes = pkload(\"../data/stop_times_array_cyril.pkl\")\n", "print(stopTimes.shape)\n", "stopTimes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`NaT` is the `None` equivalent for `numpy datetime64`."
] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ True False]\n" ] } ], "source": [ "print(np.isnat(stopTimes[0]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### stopRoutes\n", "\n", "`stopRoutes` contains the routes (as `int`s representing an index in `routes`) associated with each stop. We need the pointer in `stops` to index `stopRoutes` correctly." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(15344,)\n" ] }, { "data": { "text/plain": [ "array([ 17, 116, 126, ..., 861, 982, 1087], dtype=uint32)" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "stopRoutes = pkload(\"../data/stop_routes_array_cyril.pkl\").flatten().astype(np.uint32)\n", "print(stopRoutes.shape)\n", "stopRoutes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## transfers\n", "`transfers` is a 2D `np.ndarray` where each entry `[p_j, time]` represents (in seconds) the time it takes to walk from stop p_j to the implicitly given stop p_i.\n", "p_i is given implicitly by the indexing, in conjunction with `stops`. In other words:\n", "`transfers[stops[p_i][2]:stops[p_i][3]]` returns all the footpaths arriving at stop p_i.\n", "\n", "As we cannot store different data types in numpy arrays, `time` will have to be converted to `np.timedelta64`, the format used to represent differences between `np.datetime64` variables. We will consider all `time` values as **positive values in seconds**."
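A minimal sketch of that conversion, using a 267-second footpath like the one appearing in the transfers array (the datetime value is an arbitrary example):

```python
import numpy as np

# A transfer entry stores [p_j, time] with `time` in integer seconds
# (267 s here, like the footpath from stop 815 to stop 0).
walk_seconds = 267

# For datetime arithmetic, the integer seconds must first be wrapped
# in a np.timedelta64 with unit 's':
tau_dep_next = np.datetime64('2020-05-24T07:10')
latest_departure = tau_dep_next - np.timedelta64(int(walk_seconds), 's')
print(latest_departure)  # 2020-05-24T07:05:33
```

Note that subtracting a second-resolution `timedelta64` from a minute-resolution `datetime64` promotes the result to second resolution.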
] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(6264, 2)\n" ] }, { "data": { "text/plain": [ "array([[ 815, 267],\n", " [1350, 569],\n", " [ 63, 470],\n", " ...,\n", " [1113, 382],\n", " [1122, 338],\n", " [1270, 553]], dtype=uint16)" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "transfers = pkload(\"../data/transfer_array_cyril.pkl\").astype(np.uint16)\n", "print(transfers.shape)\n", "transfers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## stops\n", "\n", "`stops` stores the indices in `stopRoutes` and in `transfers` corresponding to each stop.\n", "\n", "`stopRoutes[stops[p][0]:stops[p][1]]` returns the routes serving stop p.\n", "\n", "`transfers[stops[p][2]:stops[p][3]]` returns the footpaths arriving at stop p." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ 0 75]\n", "[0 0]\n", "(1407, 2)\n", "[ 0 0 0 75]\n" ] }, { "data": { "text/plain": [ "array([[ 0, 11, 0, 2],\n", " [ 11, 20, 2, 7],\n", " [ 20, 38, 7, 22],\n", " ...,\n", " [15303, 15334, 6242, 6250],\n", " [15334, 15339, 6250, 6257],\n", " [15339, 15344, 6257, 6264]], dtype=uint32)" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "stops = pkload(\"../data/stops_array_cyril.pkl\")\n", "print(np.isnan(stops.astype(np.float64)).sum(axis=0))\n", "print(np.equal(stops, None).sum(axis=0))\n", "print(stops.shape)\n", "stops = stops[:,[0,0,1,1]]\n", "# Make column 1 contain the start_index of the next stop in stopRoutes\n", "stops[:-1,1] = stops[1:,0]\n", "stops[-1, 1] = stopRoutes.shape[0]\n", "# Fill NaN entries in column 2 (start_index in transfers) with the previous stop's value\n", "for i in np.isnan(stops[:,2].astype(np.float64)).nonzero()[0]:\n", " stops[i,2] = stops[i-1,2]\n", "print(np.isnan(stops.astype(np.float64)).sum(axis=0))\n", 
"stops[:-1,3] = stops[1:,2]\n", "stops[-1, 3] = transfers.shape[0]\n", "# Convert to int\n", "stops = stops.astype(np.uint32)\n", "stops" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "routes serving stop 0: [ 17 116 126 144 169 250 267 356 573 617 1054]\n", "stops of route 17: [ 168 1365 504 434 715 454 1236 186 959 81 130 774 284 958\n", " 815 0 1350 265 780 305 490 16 180 1397]\n", "stops of route 116: [1397 180 16 490 305 780 265 1350 0 815 958 284 774 130\n", " 81 959 186 1236 454]\n", "stops of route 126: [ 0 1350 265 780 490 16 180 1397]\n", "stops of route 144: [1397 180 16 490 780 265 1350 0 815 958 284 774 130 81\n", " 959 186 1236 454 715 434 504 1365 168]\n", "stops of route 169: [ 0 1350 265 780 305 490 16 180 1397]\n", "stops of route 250: [ 0 815 958 284 774 130 81 959 186 1236 454 715 434 504\n", " 1365 168]\n", "stops of route 267: [1397 180 16 490 780 265 1350 0 815 958 284 774 130 81\n", " 959 186 1236 454]\n", "stops of route 356: [1397 180 16 490 305 780 265 1350 0 815 958 284 774 130\n", " 81 959 186 1236 454 715 434 504 1365 168]\n", "stops of route 573: [ 454 1236 186 959 81 130 774 284 958 815 0 1350 265 780\n", " 490 16 180 1397]\n", "stops of route 617: [ 454 1236 186 959 81 130 774 284 958 815 0 1350 265 780\n", " 305 490 16 180 1397]\n", "stops of route 1054: [ 168 1365 504 434 715 454 1236 186 959 81 130 774 284 958\n", " 815 0 1350 265 780 490 16 180 1397]\n", "stop 0 can be reached from stop 815 by walking for 267 seconds.\n", "stop 0 can be reached from stop 1350 by walking for 569 seconds.\n" ] } ], "source": [ "p = 0\n", "routes_serving_p = stopRoutes[stops[p][0]:stops[p][1]]\n", "print(\"routes serving stop 0:\", routes_serving_p)\n", "for r in routes_serving_p:\n", " print(\"stops of route {}:\".format(r), routeStops[routes[r][2]:routes[r][2]+routes[r][1]])\n", "for 
pPrime, walking_seconds in transfers[stops[p][2]:stops[p][3]]:\n", " print(\"stop {} can be reached from stop {} by walking for {} seconds.\".format(p, pPrime, walking_seconds))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Distribution of delays" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0.03314917, 0.88950276, 0.97237569, 0.98895028, 0.98895028,\n", " 0.98895028, 0.99447514, 0.99447514, 0.99447514, 1. ,\n", " 1. , 1. , 1. , 1. , 1. ,\n", " 1. , 1. , 1. , 1. , 1. ,\n", " 1. , 1. , 1. , 1. , 1. ,\n", " 1. , 1. , 1. , 1. , 1. ,\n", " 1. , 1. ],\n", " [0. , 0.85082873, 0.95027624, 0.98895028, 0.98895028,\n", " 0.98895028, 0.98895028, 0.99447514, 0.99447514, 0.99447514,\n", " 1. , 1. , 1. , 1. , 1. ,\n", " 1. , 1. , 1. , 1. , 1. ,\n", " 1. , 1. , 1. , 1. , 1. ,\n", " 1. , 1. , 1. , 1. , 1. ,\n", " 1. , 1. ]])" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import gzip \n", "with gzip.open(\"../data/join_distribution_cumulative_p_3.pkl.gz\") as distrib_pkl:\n", " distrib_delays = pickle.load(distrib_pkl)\n", " \n", "distrib_delays[0:2]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Relate `stop_id`s and `trip_headsign`s to the integer indices used in the algorithm" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
route_idstop_id_generalstop_nametrip_headsignroute_intstop_introute_desc
026-13-j19-18576240Zürich, MeierhofplatzZürich, Albisgütli01221Tram
126-13-j19-18591353Zürich, SchwertZürich, Albisgütli0816Tram
226-13-j19-18591039Zürich, Alte TrotteZürich, Albisgütli0776Tram
326-13-j19-18591121Zürich, EschergutwegZürich, Albisgütli0307Tram
426-13-j19-18591417Zürich, WaidfusswegZürich, Albisgütli0347Tram
\n", "
" ], "text/plain": [ " route_id stop_id_general stop_name trip_headsign \\\n", "0 26-13-j19-1 8576240 Zürich, Meierhofplatz Zürich, Albisgütli \n", "1 26-13-j19-1 8591353 Zürich, Schwert Zürich, Albisgütli \n", "2 26-13-j19-1 8591039 Zürich, Alte Trotte Zürich, Albisgütli \n", "3 26-13-j19-1 8591121 Zürich, Eschergutweg Zürich, Albisgütli \n", "4 26-13-j19-1 8591417 Zürich, Waidfussweg Zürich, Albisgütli \n", "\n", " route_int stop_int route_desc \n", "0 0 1221 Tram \n", "1 0 816 Tram \n", "2 0 776 Tram \n", "3 0 307 Tram \n", "4 0 347 Tram " ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "stop_times_df = pkload(\"../data/stop_times_df_cyril.pkl\")[['route_id', 'stop_id_general', 'stop_name', 'trip_headsign', 'route_int', 'stop_int', 'route_desc']]\n", "stop_times_df.head()" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[8502508 8503078 8503088 ... 8591173 8590772 8503509] ['Spreitenbach, Raiacker' 'Waldburg' 'Zürich HB SZU' ...\n", " 'Zürich, Haldenbach' 'Rüschlikon, Belvoir' 'Schlieren']\n" ] } ], "source": [ "stop_ids_names = stop_times_df[['stop_id_general', 'stop_int', 'stop_name']].drop_duplicates()\n", "assert np.all(stop_ids_names == stop_ids_names.drop_duplicates(subset='stop_int'))\n", "assert np.all(stop_ids_names == stop_ids_names.drop_duplicates(subset='stop_id_general'))\n", "assert np.all(stop_ids_names == stop_ids_names.drop_duplicates(subset='stop_name'))\n", "stop_ids_names = stop_ids_names.sort_values(by='stop_int')\n", "stop_ids = stop_ids_names['stop_id_general'].to_numpy()\n", "stop_names = stop_ids_names['stop_name'].to_numpy()\n", "print(stop_ids, stop_names)" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ - "428 8576095 Uster, Stadtpark\n" + "1248 8591127 Zürich, Felsenrainstrasse\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", " \n", " \n", " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", " \n", " \n", "
route_idstop_id_generalstop_nametrip_headsignroute_intstop_introute_desc
575626-813-j19-18576095Uster, StadtparkUster, Bahnhof70428511926-768-j19-18591127Zürich, FelsenrainstrasseZürich Flughafen, Bahnhof521248Bus
576426-813-j19-18576095Uster, StadtparkUster, Bahnhof70428Bus520926-14-A-j19-18591127Zürich, FelsenrainstrasseZürich, Seebach581248Tram
\n", "
" ], "text/plain": [ - " route_id stop_id_general stop_name trip_headsign \\\n", - "5756 26-813-j19-1 8576095 Uster, Stadtpark Uster, Bahnhof \n", - "5764 26-813-j19-1 8576095 Uster, Stadtpark Uster, Bahnhof \n", + " route_id stop_id_general stop_name \\\n", + "5119 26-768-j19-1 8591127 Zürich, Felsenrainstrasse \n", + "5209 26-14-A-j19-1 8591127 Zürich, Felsenrainstrasse \n", "\n", - " route_int stop_int route_desc \n", - "5756 70 428 Bus \n", - "5764 70 428 Bus " + " trip_headsign route_int stop_int route_desc \n", + "5119 Zürich Flughafen, Bahnhof 52 1248 Bus \n", + "5209 Zürich, Seebach 58 1248 Tram " ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "p = np.random.randint(stops.shape[0])\n", "print(p, stop_ids[p], stop_names[p])\n", "stop_times_df[stop_times_df['stop_int'] == p].head(2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Implementing the reversed Multiple Criteria RAPTOR\n", "\n", "Based on modified version of RAPTOR (reversed RAPTOR), we implement a multiple criteria RAPTOR algorithm.\n", "The optimization criteria are:\n", "- Latest departure\n", "- Highest probability of success of the entire trip\n", "- Lowest number of connections (implicit with the round-based approach)" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "numpy.datetime64('2020-05-11T15:28')" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# absolute constants:\n", "\n", "tau_change_platform = np.timedelta64(2, 'm')\n", "np.datetime64('2020-05-11T15:30') - tau_change_platform" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "# helper functions\n", "\n", "def calc_stopTimes_idx(r, t, offset_p):\n", " \"\"\"Returns the index of the entry in stopTimes\n", " corresponding to the offset_p-th stop of the t-th trip\n", " of route r.\n", " \"\"\"\n", " return (routes[r][3] # 
1st trip of route\n", " + t * routes[r][1] # offset for the right trip\n", " + offset_p # offset for the right stop\n", " )\n", "\n", "def get_arrival_time(r, t, offset_p):\n", " \"\"\"Returns an early sentinel datetime (year 2000) if t is None.\n", " Otherwise, returns the arrival time of the t-th trip of route r\n", " at the offset_p-th stop of route r.\n", " trips and stops of route r start at t=0, offset_p=0.\n", " \"\"\"\n", " if t is None:\n", " return np.datetime64('2000-01-01T01:00')\n", " \n", " return stopTimes[calc_stopTimes_idx(r,t,offset_p)][0] # 0 for arrival time\n", "\n", "def get_departure_time(r, t, offset_p):\n", " \"\"\"Raises TypeError if t is None.\n", " Otherwise, returns the departure time of the t-th trip of route r\n", " at the offset_p-th stop of route r.\n", " trips and stops of route r start at t=0 & offset_p=0.\n", " \"\"\"\n", " if t is None:\n", " raise TypeError(\"Requested departure time of None trip!\")\n", " \n", " return stopTimes[calc_stopTimes_idx(r,t,offset_p)][1] # 1 for departure time\n", "\n", "def get_stops(r):\n", " \"\"\"Returns the stops of route r\"\"\"\n", " idx_first_stop = routes[r][2]\n", " return routeStops[idx_first_stop:idx_first_stop+routes[r][1]] # n_stops = routes[r][1]\n", "\n", "def time2str(t):\n", " \"\"\"Returns the 'hh:mm' part of np.datetime64 t as a str.\"\"\"\n", " return str(t.astype('datetime64[m]')).split('T')[1]" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "class InstantiationException(Exception):\n", " pass\n", "\n", "class BaseLabel:\n", " \"\"\"An abstract base class for Labels. 
Do not instantiate.\n", " A label corresponds to a recursive (partial) solution, going\n", " to the target stop from the stop currently under consideration.\n", " \"\"\"\n", " indent = \" \"*4\n", " def __init__(self, stop, tau_dep, Pr, n_trips):\n", " self.stop = stop\n", " self.tau_dep = tau_dep\n", " self.Pr = Pr\n", " self.n_trips = n_trips\n", " \n", " def dominates(self, other):\n", " \"\"\"Returns True if self dominates other, else returns False.\n", " other: another Label instance.\n", " \"\"\"\n", " if self.tau_dep >= other.tau_dep \\\n", " and self.Pr >= other.Pr \\\n", " and self.n_trips <= other.n_trips :\n", " return True\n", " return False\n", " \n", " def stop2str(self):\n", " \"\"\"Returns a printable str representing self.stop\"\"\"\n", " return \"stop {stopN} ({stopI})\".format(\n", " stopN = stop_names[self.stop],\n", " stopI = stop_ids[self.stop]\n", " )\n", " \n", " def print_journey(self):\n", " print(\"Departure stop: \", self.stop2str())\n", " print(\"Departure time: \", time2str(self.tau_dep))\n", " print(\"Number of trips used: \", self.n_trips)\n", " print(\"Probability of success: {:.4f}\".format(self.Pr))\n", " \n", " self.print_instructions()\n", " \n", " def to_str(self):\n", " s = \"Departure at {0} from stop {1} (id: {2}, int: {3}).\".format(\n", " self.tau_dep,\n", " stop_names[self.stop],\n", " stop_ids[self.stop],\n", " self.stop\n", " )\n", " return repr(type(self)) + s\n", " \n", " def pprint(self, indent=''):\n", " print(indent, self.to_str())\n", " \n", " def copy(self):\n", " raise InstantiationException(\"class BaseLabel should never \"\n", " \"be instantiated.\"\n", " )\n", "\n", "class ImmutableLabel(BaseLabel):\n", " \"\"\"Base class for immutable Labels\"\"\"\n", " def copy(self):\n", " return self\n", "\n", "class TargetLabel(ImmutableLabel):\n", " \"\"\"A special type of label reserved for the target stop.\"\"\"\n", " def __init__(self, stop, tau_dep):\n", " BaseLabel.__init__(self, stop, tau_dep, 1., 0)\n", " \n", " 
def print_instructions(self):\n", " \"\"\"Finish printing instructions for the journey.\"\"\"\n", " print(\"Target stop: \", self.stop2str())\n", " print(\"Requested arrival time:\", time2str(self.tau_dep))\n", "\n", "class WalkLabel(ImmutableLabel):\n", " \"\"\"A special type of label for walking connections.\"\"\"\n", " def __init__(self, stop, tau_walk, next_label):\n", " \"\"\"Create a new WalkLabel instance.\n", " stop: stop where you start walking.\n", " tau_walk: (np.timedelta64) duration of the walk.\n", " next_label: label describing the rest of the trip after walking.\n", " \"\"\"\n", " if isinstance(next_label, WalkLabel):\n", " raise ValueError(\"Cannot chain two consecutive WalkLabels!\")\n", " tau_dep = next_label.tau_dep - tau_walk - tau_change_platform\n", " BaseLabel.__init__(self, stop, tau_dep, next_label.Pr, next_label.n_trips)\n", " self.tau_walk = tau_walk\n", " self.next_label = next_label\n", " \n", " def print_instructions(self):\n", " \"\"\"Recursively print instructions for the whole journey.\"\"\"\n", " print(self.indent + \"Walk {:.1f} minutes from\".format(\n", " self.tau_walk / np.timedelta64(1,'m')\n", " ),\n", " self.stop2str()\n", " )\n", " print(self.indent*2 + \"to\", self.next_label.stop2str())\n", " self.next_label.print_instructions()\n", "\n", "class RouteLabel(BaseLabel):\n", " \"\"\"A type of label for regular transports.\"\"\"\n", " def __init__(self,\n", " tau_dep,\n", " r,\n", " t,\n", " offset_p,\n", " next_label,\n", " Pr_connection_success\n", " ):\n", " \n", " self.tau_dep = tau_dep\n", " self.r = r\n", " self.t = t\n", " self.offset_p_in = offset_p\n", " self.offset_p_out = offset_p\n", " self.next_label = next_label\n", " # Store Pr_connection_success for self.copy()\n", " self.Pr_connection_success = Pr_connection_success\n", " \n", " self.route_stops = get_stops(self.r)\n", " self.stop = self.route_stops[self.offset_p_in]\n", " self.Pr = self.Pr_connection_success * self.next_label.Pr\n", " self.n_trips = 
self.next_label.n_trips + 1\n", " \n", " def update_stop(self, stop):\n", " self.stop = stop\n", " self.offset_p_in = self.offset_p_in - 1\n", " # Sanity check:\n", " assert self.offset_p_in >= 0\n", " assert self.route_stops[self.offset_p_in] == stop\n", " self.tau_dep = get_departure_time(self.r, self.t, self.offset_p_in)\n", " \n", " def print_instructions(self):\n", " \"\"\"Recursively print instructions for the whole journey.\"\"\"\n", " stopTimes_idx = calc_stopTimes_idx(self.r, self.t,\n", " self.offset_p_in)\n", " \n", " print(self.indent + \"At\", self.stop2str())\n", " print(self.indent*2 + \"take the\",\n", " stop_times_df['route_desc'][stopTimes_idx], \"to\",\n", " stop_times_df['trip_headsign'][stopTimes_idx]\n", " )\n", " print(self.indent*2 + \"leaving at\", time2str(self.tau_dep))\n", " print(self.indent*2 + \"(route id: {}).\".format(\n", " stop_times_df['route_id'][stopTimes_idx]\n", " ))\n", " \n", " tau_arr = get_arrival_time(\n", " self.r,\n", " self.t,\n", " self.offset_p_out\n", " )\n", " assert self.next_label.stop == self.route_stops[self.offset_p_out]\n", " \n", " print(self.indent + \"Get out at\", self.next_label.stop2str())\n", " print(self.indent*2 + \"at\", time2str(tau_arr)+\".\")\n", "\n", " self.next_label.print_instructions()\n", " \n", " def copy(self):\n", " \"\"\"When RouteLabels are merged into the bag of a stop,\n", " they must be copied (because they will subsequently\n", " be changed with self.update_stop()).\n", " \"\"\"\n", " l = RouteLabel(self.tau_dep,\n", " self.r,\n", " self.t,\n", " self.offset_p_in,\n", " self.next_label,\n", " self.Pr_connection_success\n", " )\n", " l.offset_p_out = self.offset_p_out\n", " return l" ] }, { "cell_type": "code", - "execution_count": 30, + "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "def run_mc_raptor(p_s, p_t, arrival_time, Pr_min\n", " , incoherences\n", " ):\n", " \"\"\"Run MC RAPTOR, using the data defined in cells above (stopRoutes etc.).\n", " Inputs:\n", 
" p_s: source stop\n", " p_t: target stop\n", " arrival_time: latest acceptable arrival time tau_0. str (format: '14:56')\n", " Pr_min: minimum acceptable probability of success\n", " incoherences: dict, filled in-place with inconsistencies found in the\n", " data (stop -> set of routes listed in stopRoutes but\n", " missing from routeStops)\n", " Output:\n", " bags_p_s: bags_p_s[k] contains the Pareto set of non-dominated journeys\n", " from p_s to p_t that use at most k different trips (i.e.,\n", " getting in at most k different vehicles), under the given\n", " constraints:\n", " 1. Each journey must succeed with a probability\n", " greater than or equal to Pr_min.\n", " 2. The journey is a success if and only if all individual\n", " connections succeed, including the user's appointment\n", " in p_t at tau_0.\n", " 3. A connection succeeds if, and only if, the user reaches\n", " the platform before or on the scheduled departure time\n", " (allowing some time to change platforms).\n", " Non-dominated:\n", " A journey J1 is *dominated* by another journey J2 if\n", " J2 departs no earlier than J1 AND the probability of\n", " success of J2 is no less than that of J1.\n", " Pareto set:\n", " Each bag in bags_p_s contains only journeys that are not\n", " dominated by any other possible journey. 
Such a collection\n", " of non-dominated solutions is called a *Pareto set*.\n", " \n", " Each journey is represented as a Label that forms the start of a chain.\n", " The journey can be reconstructed by calling label.print_journey().\n", " \"\"\"\n", "# Input sanitization:\n", " tau_0 = np.datetime64('2020-05-24T'+arrival_time)\n", " \n", "# initialization\n", + " k_max = 10 # Maximum number of rounds\n", + " \n", " # For each route and for each label at each stop p, we will look at the n latest\n", " # trips until we find a trip for which the individual connection at stop p\n", " # succeeds with a probability at least equal to Pr_threshold.\n", " # Under some reasonable assumptions, setting Pr_threshold = Pr_min**(1/k)\n", " # guarantees that we will find a solution, if a solution exists involving at\n", " # most k connections (including the user's appointment in p_t at tau_0).\n", " Pr_threshold = Pr_min**(0.1)\n", " \n", - "\n", " # Initialize empty bags for each stop for round 0:\n", " n_stops = stops.shape[0]\n", " bags = [\n", " [\n", " [] # an empty bag\n", " for _ in range(n_stops)] # one empty bag per stop\n", " ]\n", "\n", " # Create a TargetLabel for p_t, and mark p_t\n", " bags[0][p_t].append(TargetLabel(p_t, tau_0))\n", " marked = {p_t}\n", "\n", "# Define bag operations (they depend on p_s and Pr_min for target pruning):\n", " def update_bag(bag, label, k):\n", " \"\"\"Add label to bag and remove dominated labels.\n", " bag is altered in-place.\n", "\n", " k: Round number, used for target pruning.\n", "\n", " returns: Boolean indicating whether bag was altered.\n", " \"\"\"\n", " # Apply the Pr_min constraint to label:\n", " if label.Pr < Pr_min:\n", " return False\n", "\n", " # Prune label if it is dominated by bags[k][p_s]:\n", " for L_star in bags[k][p_s]:\n", " if L_star.dominates(label):\n", " return False\n", "\n", " # Otherwise, merge label into bag1\n", " changed = False\n", " for L_old in bag:\n", " if L_old.dominates(label):\n", " return 
changed\n", " if label.dominates(L_old):\n", " bag.remove(L_old)\n", " changed = True\n", " bag.append(label.copy())\n", " return True\n", "\n", " def merge_bags(bag1, bag2, k):\n", " \"\"\"Merge bag2 into bag1 in-place.\n", " k: Round number, used for target pruning.\n", " returns: Boolean indicating whether bag was altered.\n", " \"\"\"\n", " changed = False\n", " for label in bag2:\n", " # Call update_bag first: `changed or update_bag(...)` would\n", " # short-circuit and skip merging once changed is True.\n", " changed = update_bag(bag1, label, k) or changed\n", " return changed\n", " \n", " globals().update({'merge_bags': merge_bags})\n", " \n", "# Define the footpaths-checking function (depends on update_bag)\n", " def check_footpaths(bags, marked, k):\n", " \"\"\"Modify bags and marked in-place to account for foot-paths.\"\"\"\n", " q = []\n", " for p in marked:\n", " for pPrime, delta_seconds in transfers[stops[p][2]:stops[p][3]]:\n", " q.append((p, pPrime, delta_seconds))\n", " for p, pPrime, delta_seconds in q:\n", " for L_k in bags[k][p]:\n", " # We do not allow two consecutive walking trips\n", " if not isinstance(L_k, WalkLabel):\n", " L_new = WalkLabel(pPrime, np.timedelta64(delta_seconds, 's'), L_k)\n", " if update_bag(bags[k][pPrime], L_new, k):\n", " marked.add(pPrime)\n", "\n", "# main loop\n", " indent = ' '*4\n", "\n", " k = 0\n", " # Check footpaths leading to p_t at k=0:\n", " check_footpaths(bags, marked, k)\n", - " while True:\n", + " while k < k_max:\n", " k += 1 # k=1 at first round, as it should.\n", "\n", " # Instead of using best bags, carry over the bags from last round.\n", " # if len(bags <= k):\n", "\n", " bags.append([bags[-1][p].copy() for p in range(n_stops)])\n", "\n", - " print('\\n******************************STARTING round k={}******************************'.format(k))\n", + " print('\\n', ' '*30, 'STARTING round k =', k)\n", + " if len(marked) < 50:\n", + " print('Marked stops at the start of the round: {}'.format(marked))\n", + " else:\n", + " print('Number of marked stops at the start of the round:', len(marked))\n", + " \n", " # accumulate routes 
serving marked stops from previous rounds\n", " q = []\n", - " print('Marked stops at the start of the round: {}'.format(marked))\n", " for p in marked:\n", " for r in stopRoutes[stops[p][0]:stops[p][1]]: # foreach route r serving p\n", " append_r_p = True\n", " for idx, (rPrime, pPrime) in enumerate(q): # is there already a stop from the same route in q ?\n", " if rPrime == r:\n", " append_r_p = False\n", " p_pos_in_r = np.where(get_stops(r) == p)[0][-1]\n", " pPrime_pos_in_r = np.where(get_stops(r) == pPrime)[0][-1]\n", " if p_pos_in_r > pPrime_pos_in_r:\n", " q[idx] = (r, p) # substituting (rPrime, pPrime) by (r, p)\n", " if append_r_p:\n", " q.append((r, p))\n", " marked.clear() # unmarking all stops\n", "# print(\"Queue:\", q)\n", "\n", "# print('Queue before traversing each route: {}'.format(q))\n", " # traverse each route\n", " for (r, p) in q:\n", "# print('\\n****TRAVERSING ROUTE r={0} from stop p={1}****'.format(r, p))\n", " B_route = [] # new empty route bag\n", "\n", " # we traverse the route backwards (starting at p, not from the end of the route)\n", " stops_of_current_route = get_stops(r)\n", "# print('Stops of current route:', stops_of_current_route)\n", " offset_p = np.asarray(stops_of_current_route == p).nonzero()[0]\n", " if offset_p.size < 1:\n", " if not p in incoherences:\n", " incoherences[p] = set()\n", " incoherences[p].add(r)\n", "# print(\"WARNING: route {r} is said to serve stop {p} in stopRoutes, but stop {p} \"\n", "# \"is not included as a stop of route {r} in routeStops...\".format(p=p, r=r))\n", " offset_p = -1\n", " else:\n", " offset_p = offset_p[-1]\n", " for offset_p_i in range(offset_p, -1, -1):\n", " p_i = stops_of_current_route[offset_p_i]\n", "# print('\\n\\n'+indent+\"p_i: {}\".format(p_i))\n", "\n", " # Update the labels of the route bag:\n", " for L in B_route:\n", " L.update_stop(p_i)\n", "\n", " # Merge B_route into bags[k][p_i]\n", " if merge_bags(bags[k][p_i], B_route, k):\n", "# print(\"marking stop\", p_i)\n", " 
marked.add(p_i)\n", "\n", " # Can we step out of a later trip at p_i ?\n", " # This is only possible if we already know a way to get from p_i to p_t in < k vehicles\n", " # (i.e., if there is at least one label in bags[k][p_i])\n", " for L_k in bags[k][p_i]:\n", " # Note that k starts at 1 and bags[0][p_t] contains a TargetLabel.\n", "# print('\\n'+indent+'----scanning arrival times for route r={0} at stop p_i={1}----'.format(r, p_i))\n", "\n", " # We check the trips from latest to earliest\n", " for t in range(routes[r][0]-1, -1, -1): # n_trips = routes[r][0]\n", " # Does t_r arrive early enough for us to make the rest \n", " # of the journey from here (tau[k-1][p_i])?\n", " tau_arr = get_arrival_time(r, t, offset_p_i)\n", "# print(indent+'arrival time: ', tau_arr)\n", " if tau_arr <= L_k.tau_dep - tau_change_platform:\n", "\n", " max_delay = L_k.tau_dep - tau_arr - tau_change_platform\n", " max_delay_int = min(max_delay.astype('timedelta64[m]').astype('int'), 30)\n", " \n", " Pr_connection = distrib_delays[calc_stopTimes_idx(r, t, offset_p_i),\n", " max_delay_int + 1]\n", "# print(Pr_connection)\n", " L_new = RouteLabel(get_departure_time(r, t, offset_p_i),\n", " r,\n", " t,\n", " offset_p_i,\n", " L_k,\n", " Pr_connection\n", " )\n", " update_bag(B_route, L_new, k)#:\n", "# print(indent+\"Explored connection from\")\n", "# L_new.pprint(indent*2)\n", "# print(indent+\"to\")\n", "# L_k.pprint(indent*2)\n", "\n", " # We don't want to add a label for every trip that's earlier than tau_dep.\n", " # Instead, we stop once we've found a trip that's safe enough.\n", " if Pr_connection > Pr_threshold:\n", " break\n", " \n", "# print(marked)\n", " # Look at foot-paths (bags and marked are altered in-place):\n", " check_footpaths(bags, marked, k)\n", "# print(marked)\n", - " # stopping criteria (1: reached equilibrium, 2: Found a solution with k-2 trips)\n", + " \n", + " # Report the number of new journeys found this round:\n", + " n_new_journeys = len(bags[k][p_s]) - 
len(bags[k-1][p_s])\n", +    "         # Note: the difference can be negative when a new label evicts dominated ones.\n", +    "         if n_new_journeys > 0:\n", +    "             print(\"Found\", n_new_journeys, \"journeys with\", k, \"trips.\")\n", +    "             # If there's a journey with X trips, we won't look for a solution with X+3 trips:\n", +    "             if not bags[k-1][p_s]:\n", +    "                 # The condition above means we found a journey for the first time.\n", +    "                 print(\"Will search for journeys with up to\", k+2, \"trips.\")\n", +    "                 k_max = k+2\n", +    "\n", +    "         # Additional stopping criterion: reached equilibrium\n", "         if not marked:\n", "             print(\"\\n\" + \"*\"*15 + \" THE END \" + \"*\"*15)\n", -    "             print(\"Equilibrium reached. The end.\")\n", -    "             break\n", -    "#         if k > 10:\n", -    "#             break\n", -    "         if k>2:\n", -    "             if bags[k-2][p_s]:\n", -    "                 print(\"\\n\" + \"*\"*15 + \" THE END \" + \"*\"*15)\n", -    "                 print(\"There is a solution with {0} connections. We shall not \"\n", -    "                       \"search for solutions with {1} or more connections\"\n", -    "                       \".\".format(k-3, k)\n", -    "                      )\n", -    "                 break\n", +    "             print(\"Equilibrium reached\", \"after\" if bags[k][p_s] else \"without\",\n", +    "                   \"finding a solution. The end.\")\n", +    "             return [bags[K][p_s] for K in range(len(bags))], bags\n", +    "     \n", +    "     # We have exited the while-loop because k == k_max:\n", +    "     print(\"\\n\" + \"*\"*15 + \" THE END \" + \"*\"*15)\n", +    "     if bags[k][p_s]:\n", +    "         print(\"There is a solution with\", k-2, \"trips.\")\n", +    "     else:\n", +    "         print(\"There are no solutions with up to\", k, \"trips.\")\n", +    "     print(\"We shall not search for journeys with\", k+1, \"or more trips.\")\n", +    "\n", "     return [bags[K][p_s] for K in range(len(bags))], bags\n", "\n", "def time_sorted_pareto(bags_p):\n", "     \"\"\"Input: list of bags, e.g. 
for one stop and various k.\n", "     It is assumed that Pr >= Pr_min for each label.\n", "     Output: A Pareto set of non-dominated labels (np.array),\n", "     sorted by decreasing departure time.\n", "     \"\"\"\n", "     res_bag = []\n", "     for bag in bags_p:\n", "         globals()['merge_bags'](res_bag, bag, 0)\n", "     res = np.array(res_bag)\n", "     return res[np.argsort([label.tau_dep for label in res])[::-1]]\n", "\n", "def print_solutions(bags_p):\n", "     print(\"Showing Pareto set of non-dominated journeys\")\n", "     L_s = bags_p[-1][0]\n", "     L_t = L_s\n", "     while not isinstance(L_t, TargetLabel):\n", "         L_t = L_t.next_label\n", "     print(\"from\", L_s.stop2str(), \"to\", L_t.stop2str())\n", "     print(\"with criteria:\")\n", "     print(\" - maximize departure time\")\n", "     print(\" - maximize probability of success\")\n", "     print(\" - minimize number of individual trips\")\n", "     print(\"and constraints:\")\n", "     print(\" - arrival time at least\", tau_change_platform,\n", "           \"before\", time2str(L_t.tau_dep), \"(2 minutes to walk from the platform to the extraction point).\") # tau_change_platform is a timedelta and already prints its unit\n", "     print(\" - probability of success at least {Pr_min:.4f}.\".format(**globals()))\n", "     print(\"\\nJourneys are sorted by descending departure time.\")\n", "     \n", "     for i, label in enumerate(time_sorted_pareto(bags_p)):\n", "         print('\\n'*2,'-'*10, 'OPTION', i+1, '-'*10)\n", "         label.print_journey()" ] }, { "cell_type": "code", -   "execution_count": 32, +   "execution_count": 33, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ -     "160 --> 498\n", -     "\n", -     "******************************STARTING round k=1******************************\n", -     "Marked stops at the start of the round: {1105, 498, 1394, 468, 60}\n", +     "864 --> 1176\n", "\n", -     "******************************STARTING round k=2******************************\n", -     "Marked stops at the start of the round: {130, 643, 514, 1285, 1028, 526, 277, 534, 409, 26, 795, 284, 800, 34, 553, 1326, 311, 183, 698, 699, 
959, 1090, 834, 1228, 718, 81, 212, 981, 85, 727, 88, 345, 217, 344, 348, 1111, 741, 358, 233, 873, 1003, 491, 1007, 882, 119, 1275, 892, 125, 511}\n", + " STARTING round k = 1\n", + "Marked stops at the start of the round: {1366, 1176, 636, 861, 254}\n", "\n", - "******************************STARTING round k=3******************************\n", - "Marked stops at the start of the round: {0, 2, 4, 7, 10, 12, 14, 16, 20, 22, 24, 25, 27, 30, 32, 33, 37, 39, 41, 43, 48, 50, 53, 57, 61, 68, 69, 71, 75, 77, 79, 81, 82, 84, 86, 90, 91, 96, 98, 102, 104, 106, 108, 109, 113, 118, 120, 126, 128, 130, 131, 132, 138, 144, 145, 147, 148, 149, 154, 155, 161, 162, 163, 166, 168, 169, 170, 173, 174, 175, 176, 177, 180, 181, 183, 184, 186, 187, 190, 202, 205, 210, 211, 218, 221, 228, 231, 234, 237, 238, 244, 247, 252, 253, 256, 259, 262, 265, 266, 267, 268, 271, 272, 273, 275, 276, 279, 284, 288, 290, 298, 302, 303, 305, 309, 311, 312, 322, 325, 327, 333, 334, 336, 337, 345, 346, 348, 349, 351, 352, 353, 357, 358, 362, 364, 370, 371, 375, 376, 386, 388, 389, 390, 399, 402, 404, 408, 410, 421, 422, 423, 424, 425, 427, 430, 434, 436, 440, 444, 445, 446, 451, 452, 453, 454, 455, 465, 472, 473, 476, 478, 483, 488, 490, 492, 496, 499, 502, 504, 505, 509, 521, 522, 526, 527, 528, 530, 532, 539, 547, 549, 550, 553, 556, 559, 562, 567, 570, 572, 575, 577, 578, 581, 582, 583, 587, 589, 591, 598, 602, 606, 609, 614, 615, 616, 618, 622, 628, 629, 630, 634, 640, 648, 653, 654, 656, 657, 659, 660, 663, 665, 667, 673, 687, 690, 696, 697, 699, 700, 704, 709, 710, 711, 712, 713, 715, 716, 717, 718, 720, 721, 723, 724, 731, 732, 733, 739, 740, 742, 745, 747, 748, 750, 751, 753, 754, 759, 760, 761, 763, 764, 766, 767, 768, 771, 774, 777, 779, 780, 781, 785, 786, 787, 788, 789, 790, 792, 796, 801, 815, 820, 822, 823, 824, 825, 827, 828, 830, 831, 832, 838, 839, 841, 843, 844, 847, 850, 852, 853, 858, 859, 867, 868, 871, 873, 877, 879, 880, 881, 883, 884, 887, 888, 889, 890, 892, 896, 898, 899, 907, 
909, 917, 923, 924, 925, 926, 927, 933, 934, 935, 936, 940, 941, 945, 947, 957, 958, 959, 960, 963, 964, 966, 968, 969, 971, 975, 976, 978, 981, 983, 987, 989, 991, 992, 993, 1000, 1005, 1009, 1010, 1011, 1015, 1017, 1020, 1022, 1024, 1026, 1027, 1030, 1032, 1033, 1035, 1039, 1041, 1042, 1048, 1051, 1052, 1053, 1055, 1058, 1060, 1062, 1064, 1066, 1067, 1068, 1070, 1071, 1073, 1075, 1078, 1081, 1084, 1092, 1094, 1096, 1102, 1103, 1104, 1108, 1112, 1113, 1114, 1122, 1124, 1125, 1128, 1134, 1136, 1141, 1142, 1143, 1146, 1149, 1151, 1152, 1153, 1154, 1155, 1156, 1157, 1159, 1161, 1163, 1166, 1173, 1175, 1180, 1181, 1182, 1183, 1193, 1196, 1197, 1206, 1208, 1209, 1215, 1216, 1217, 1219, 1221, 1222, 1229, 1232, 1233, 1236, 1237, 1238, 1245, 1246, 1248, 1249, 1253, 1265, 1267, 1268, 1270, 1273, 1274, 1277, 1290, 1296, 1297, 1298, 1303, 1305, 1309, 1311, 1312, 1313, 1314, 1316, 1318, 1325, 1327, 1329, 1332, 1337, 1344, 1346, 1350, 1351, 1352, 1353, 1354, 1365, 1367, 1368, 1369, 1371, 1374, 1376, 1378, 1382, 1385, 1387, 1389, 1390, 1391, 1395, 1396, 1397, 1399, 1403, 1406}\n", + " STARTING round k = 2\n", + "Number of marked stops at the start of the round: 84\n", + "Found 4 journeys with 2 trips.\n", + "Will search for journeys with up to 4 trips.\n", "\n", - "******************************STARTING round k=4******************************\n", - "Marked stops at the start of the round: {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 28, 30, 31, 32, 33, 35, 36, 39, 40, 41, 42, 43, 44, 45, 47, 48, 49, 51, 52, 53, 54, 55, 56, 57, 58, 59, 61, 63, 65, 66, 67, 70, 71, 73, 74, 75, 76, 77, 78, 79, 80, 82, 83, 84, 87, 89, 91, 92, 93, 94, 96, 97, 98, 99, 100, 104, 105, 106, 107, 108, 109, 110, 112, 113, 114, 115, 116, 117, 118, 121, 122, 123, 124, 127, 129, 131, 132, 133, 134, 136, 137, 138, 139, 141, 142, 143, 144, 145, 146, 147, 148, 149, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 162, 163, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 
175, 176, 177, 178, 179, 180, 182, 185, 188, 189, 190, 192, 193, 194, 195, 196, 197, 198, 201, 202, 203, 204, 205, 206, 207, 208, 209, 211, 214, 215, 216, 218, 219, 220, 221, 223, 224, 225, 226, 227, 229, 230, 231, 232, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 278, 279, 282, 283, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 305, 306, 307, 309, 310, 312, 313, 314, 316, 317, 318, 319, 320, 321, 322, 323, 325, 326, 328, 329, 330, 331, 332, 336, 337, 338, 339, 340, 341, 343, 347, 352, 353, 354, 356, 357, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 382, 383, 384, 385, 386, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 400, 401, 402, 403, 405, 406, 407, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 425, 426, 427, 428, 429, 431, 432, 433, 436, 437, 439, 440, 441, 442, 443, 444, 445, 448, 449, 450, 452, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 469, 470, 471, 473, 474, 475, 478, 480, 481, 482, 483, 485, 487, 489, 490, 493, 494, 495, 496, 497, 499, 500, 501, 502, 503, 505, 508, 510, 513, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 528, 529, 530, 532, 533, 535, 537, 539, 540, 541, 543, 544, 547, 551, 552, 554, 555, 556, 557, 558, 559, 560, 561, 562, 565, 566, 567, 568, 569, 570, 571, 573, 574, 575, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 589, 590, 591, 592, 594, 597, 598, 599, 601, 602, 604, 605, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 641, 642, 644, 645, 646, 647, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 661, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 
686, 687, 688, 689, 692, 693, 694, 695, 696, 697, 700, 701, 702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712, 713, 714, 716, 719, 723, 724, 726, 728, 730, 731, 732, 733, 735, 736, 737, 738, 739, 740, 743, 744, 746, 747, 748, 749, 750, 751, 752, 753, 754, 755, 756, 757, 758, 759, 760, 762, 763, 764, 765, 766, 767, 768, 769, 770, 771, 772, 773, 774, 775, 776, 777, 778, 779, 780, 782, 784, 785, 786, 787, 790, 791, 792, 793, 794, 797, 798, 799, 801, 802, 803, 804, 806, 807, 808, 811, 812, 813, 814, 815, 816, 817, 818, 819, 820, 821, 822, 824, 825, 826, 827, 829, 830, 831, 832, 833, 835, 836, 837, 838, 840, 841, 842, 843, 845, 846, 847, 848, 849, 850, 851, 852, 853, 854, 855, 856, 857, 858, 859, 861, 862, 864, 865, 866, 869, 871, 872, 874, 875, 876, 877, 878, 879, 880, 881, 884, 885, 886, 887, 890, 891, 893, 894, 895, 896, 897, 899, 900, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913, 914, 915, 916, 917, 918, 919, 920, 921, 922, 925, 926, 928, 929, 930, 931, 932, 933, 934, 937, 939, 940, 941, 943, 944, 945, 946, 947, 948, 949, 950, 951, 952, 954, 955, 956, 957, 958, 961, 962, 963, 965, 968, 969, 970, 971, 972, 974, 975, 976, 977, 979, 980, 982, 983, 984, 985, 986, 987, 990, 991, 993, 994, 995, 996, 997, 998, 999, 1000, 1001, 1004, 1006, 1008, 1009, 1010, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023, 1024, 1025, 1026, 1029, 1030, 1031, 1032, 1033, 1034, 1035, 1037, 1038, 1039, 1040, 1041, 1042, 1043, 1044, 1046, 1051, 1052, 1054, 1055, 1056, 1057, 1059, 1060, 1061, 1062, 1063, 1064, 1065, 1066, 1067, 1069, 1071, 1072, 1073, 1074, 1075, 1076, 1077, 1080, 1082, 1083, 1084, 1085, 1086, 1087, 1088, 1091, 1092, 1094, 1096, 1097, 1098, 1099, 1100, 1102, 1103, 1106, 1107, 1110, 1112, 1113, 1114, 1115, 1116, 1117, 1119, 1120, 1122, 1123, 1125, 1126, 1127, 1128, 1130, 1131, 1132, 1133, 1134, 1135, 1136, 1137, 1139, 1140, 1144, 1145, 1146, 1147, 1148, 1149, 1151, 1152, 1153, 1154, 1155, 1156, 1157, 1158, 1159, 1160, 1161, 1162, 1163, 1164, 
1166, 1167, 1168, 1169, 1171, 1173, 1174, 1175, 1176, 1177, 1178, 1179, 1180, 1181, 1184, 1185, 1186, 1187, 1188, 1189, 1190, 1191, 1192, 1193, 1195, 1196, 1198, 1199, 1200, 1202, 1203, 1204, 1205, 1206, 1208, 1211, 1212, 1213, 1214, 1215, 1216, 1218, 1220, 1221, 1223, 1224, 1226, 1230, 1231, 1233, 1236, 1238, 1239, 1240, 1241, 1242, 1243, 1244, 1246, 1247, 1248, 1249, 1250, 1252, 1253, 1254, 1255, 1256, 1257, 1258, 1259, 1260, 1261, 1262, 1265, 1266, 1267, 1268, 1269, 1270, 1271, 1273, 1274, 1276, 1277, 1278, 1279, 1280, 1281, 1282, 1283, 1284, 1287, 1288, 1289, 1290, 1291, 1292, 1293, 1294, 1295, 1296, 1298, 1299, 1300, 1301, 1302, 1304, 1305, 1306, 1308, 1309, 1310, 1312, 1313, 1315, 1316, 1317, 1318, 1319, 1320, 1321, 1322, 1323, 1324, 1327, 1328, 1329, 1330, 1331, 1332, 1333, 1334, 1335, 1336, 1338, 1339, 1340, 1341, 1343, 1344, 1345, 1346, 1347, 1349, 1350, 1351, 1352, 1354, 1355, 1356, 1357, 1358, 1359, 1360, 1362, 1363, 1364, 1366, 1367, 1369, 1370, 1372, 1373, 1374, 1375, 1376, 1377, 1378, 1379, 1381, 1384, 1385, 1386, 1387, 1388, 1389, 1390, 1391, 1392, 1393, 1396, 1397, 1398, 1399, 1400, 1401, 1404, 1405, 1406}\n", + " STARTING round k = 3\n", + "Number of marked stops at the start of the round: 599\n", + "Found 3 journeys with 3 trips.\n", "\n", - "******************************STARTING round k=5******************************\n", - "Marked stops at the start of the round: {1, 3, 4, 5, 10, 11, 12, 13, 17, 18, 19, 21, 28, 30, 31, 32, 33, 35, 40, 41, 47, 49, 51, 52, 56, 58, 59, 63, 66, 67, 70, 71, 74, 75, 76, 77, 78, 82, 84, 87, 89, 91, 92, 96, 97, 98, 99, 100, 104, 107, 113, 114, 115, 116, 117, 122, 123, 124, 127, 131, 133, 134, 136, 137, 138, 139, 141, 142, 144, 147, 148, 149, 151, 153, 154, 156, 157, 160, 162, 165, 167, 169, 170, 171, 175, 176, 177, 179, 189, 190, 193, 194, 195, 196, 197, 198, 203, 204, 205, 206, 207, 209, 214, 216, 218, 219, 221, 223, 224, 225, 226, 227, 230, 231, 232, 235, 239, 240, 242, 243, 245, 247, 248, 250, 251, 252, 255, 258, 
262, 264, 267, 268, 270, 271, 273, 274, 275, 276, 279, 283, 286, 287, 289, 291, 292, 294, 296, 297, 298, 299, 300, 301, 306, 307, 309, 314, 319, 320, 321, 322, 325, 326, 328, 331, 332, 336, 337, 339, 341, 343, 347, 353, 354, 356, 357, 360, 363, 365, 366, 368, 371, 372, 373, 374, 376, 380, 382, 383, 384, 385, 388, 390, 391, 393, 394, 396, 397, 400, 402, 403, 404, 405, 407, 411, 412, 415, 416, 417, 421, 426, 427, 428, 429, 432, 433, 437, 439, 440, 443, 445, 448, 449, 450, 452, 455, 456, 457, 458, 461, 462, 463, 464, 465, 466, 467, 469, 470, 471, 473, 480, 481, 487, 494, 495, 497, 501, 505, 510, 513, 518, 519, 520, 522, 525, 530, 532, 533, 535, 537, 539, 543, 544, 551, 552, 557, 558, 559, 560, 561, 566, 567, 569, 570, 571, 573, 581, 583, 584, 587, 590, 594, 597, 598, 599, 601, 607, 608, 611, 612, 613, 615, 617, 619, 621, 623, 624, 625, 626, 628, 630, 631, 632, 633, 639, 641, 642, 647, 649, 652, 653, 654, 655, 656, 661, 664, 666, 668, 669, 670, 671, 672, 674, 676, 677, 678, 679, 681, 682, 683, 684, 685, 686, 687, 688, 689, 692, 695, 697, 701, 702, 705, 707, 710, 711, 713, 716, 719, 723, 726, 728, 732, 733, 735, 736, 737, 738, 739, 740, 743, 744, 746, 748, 749, 751, 753, 757, 758, 760, 762, 763, 764, 766, 767, 768, 769, 773, 775, 776, 779, 782, 784, 786, 787, 791, 792, 793, 794, 798, 802, 803, 804, 807, 808, 812, 814, 816, 818, 819, 821, 822, 824, 825, 826, 827, 829, 831, 833, 835, 836, 840, 842, 845, 846, 847, 848, 851, 852, 853, 855, 857, 864, 869, 871, 872, 874, 875, 876, 877, 879, 881, 890, 891, 893, 895, 897, 899, 902, 903, 904, 906, 907, 908, 909, 910, 911, 915, 918, 919, 920, 922, 928, 929, 930, 931, 937, 943, 944, 945, 946, 949, 951, 952, 955, 956, 961, 962, 965, 968, 974, 975, 976, 977, 980, 982, 983, 984, 985, 986, 990, 994, 997, 998, 999, 1000, 1004, 1006, 1008, 1013, 1014, 1015, 1016, 1017, 1019, 1022, 1024, 1025, 1029, 1031, 1032, 1035, 1037, 1038, 1039, 1040, 1041, 1042, 1043, 1055, 1056, 1057, 1059, 1062, 1063, 1066, 1067, 1071, 1072, 1073, 1074, 1076, 
1080, 1083, 1084, 1085, 1086, 1091, 1092, 1094, 1097, 1099, 1102, 1103, 1106, 1107, 1110, 1112, 1115, 1116, 1117, 1123, 1124, 1125, 1127, 1131, 1133, 1134, 1135, 1136, 1139, 1145, 1146, 1147, 1148, 1149, 1153, 1154, 1156, 1158, 1164, 1166, 1167, 1168, 1169, 1174, 1177, 1180, 1181, 1184, 1185, 1186, 1187, 1188, 1189, 1190, 1192, 1198, 1199, 1202, 1203, 1205, 1206, 1212, 1214, 1215, 1216, 1219, 1221, 1223, 1224, 1229, 1230, 1238, 1239, 1241, 1242, 1243, 1247, 1248, 1249, 1250, 1254, 1255, 1261, 1262, 1265, 1266, 1274, 1276, 1278, 1279, 1280, 1282, 1283, 1284, 1287, 1288, 1291, 1292, 1293, 1294, 1300, 1302, 1304, 1306, 1308, 1309, 1310, 1313, 1315, 1317, 1318, 1323, 1324, 1327, 1328, 1329, 1333, 1334, 1336, 1339, 1340, 1343, 1345, 1346, 1349, 1352, 1354, 1355, 1356, 1357, 1358, 1359, 1360, 1363, 1364, 1369, 1373, 1375, 1377, 1379, 1381, 1386, 1388, 1389, 1390, 1391, 1393, 1396, 1398, 1399, 1400, 1401, 1404, 1405}\n", + " STARTING round k = 4\n", + "Number of marked stops at the start of the round: 473\n", + "Found -1 journeys with 4 trips.\n", "\n", "*************** THE END ***************\n", - "There is a solution with 2 connections. 
We shall not search for solutions with 5 or more connections.\n" +     "There is a solution with 2 trips.\n", +     "We shall not search for journeys with 5 or more trips.\n" ] } ], "source": [ "p_s = np.random.randint(stops.shape[0]) # start stop\n", "p_t = np.random.randint(stops.shape[0]) # target stop\n", "print(p_s, '-->', p_t)\n", "tau_0 = '17:30' # arrival time\n", "Pr_min = 0.9\n", "incoherences = {}\n", "bags_p_s, bags = run_mc_raptor(p_s, p_t, tau_0, Pr_min\n", "                               , incoherences\n", "                              )" ] }, { "cell_type": "code", -   "execution_count": 33, +   "execution_count": 23, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{}" ] }, -     "execution_count": 33, +     "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "incoherences" ] }, { "cell_type": "code", -   "execution_count": 34, +   "execution_count": 27, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Showing Pareto set of non-dominated journeys\n", -     "from stop Zürich, Ausserdorfstrasse (8591048) to stop Bergdietikon, Baltenschwil (8590200)\n", +     "from stop Zürich, Friedhof Enzenbühl (8591142) to stop Uster, Talacker (8575957)\n", "with criteria:\n", " - maximize departure time\n", " - maximize probability of success\n", " - minimize number of individual trips\n", "and constraints:\n", " - arrival time at least 2 minutes before 17:30 (2 minutes to walk from the platform to the extraction point).\n", " - probability of success at least 0.9000.\n", "\n", "Journeys are sorted by descending departure time.\n", "\n", "\n", " ---------- OPTION 1 ----------\n", -     "Departure stop: stop Zürich, Ausserdorfstrasse (8591048)\n", -     "Departure time: 16:07\n", +     "Departure stop: stop Zürich, Friedhof Enzenbühl (8591142)\n", +     "Departure time: 16:15\n", "Number of trips used: 3\n", -     "Probability of success: 0.9709\n", -     " Walk 6.2 minutes from stop Zürich, Ausserdorfstrasse (8591048)\n", -     " to stop Zürich, Seebach (8591354)\n", -     " At 
stop Zürich, Seebach (8591354)\n", - " take the Bus to Zürich Oerlikon, Bahnhof\n", - " leaving at 16:16\n", - " (route id: 26-768-j19-1).\n", - " Get out at stop Zürich Oerlikon, Bahnhof (8580449)\n", - " at 16:21.\n", - " Walk 1.2 minutes from stop Zürich Oerlikon, Bahnhof (8580449)\n", - " to stop Zürich Oerlikon (8503006)\n", - " At stop Zürich Oerlikon (8503006)\n", - " take the S-Bahn to Dietikon\n", - " leaving at 16:34\n", - " (route id: 26-19-j19-1).\n", - " Get out at stop Dietikon (8503508)\n", + "Probability of success: 0.9945\n", + " Walk 4.1 minutes from stop Zürich, Friedhof Enzenbühl (8591142)\n", + " to stop Zürich, Rehalp (8591315)\n", + " At stop Zürich, Rehalp (8591315)\n", + " take the S-Bahn to Zürich Stadelhofen FB\n", + " leaving at 16:22\n", + " (route id: 26-18-j19-1).\n", + " Get out at stop Zürich Stadelhofen FB (8503059)\n", + " at 16:32.\n", + " Walk 0.9 minutes from stop Zürich Stadelhofen FB (8503059)\n", + " to stop Zürich Stadelhofen (8503003)\n", + " At stop Zürich Stadelhofen (8503003)\n", + " take the S-Bahn to Rapperswil\n", + " leaving at 16:42\n", + " (route id: 26-15-j19-1).\n", + " Get out at stop Uster (8503125)\n", " at 16:53.\n", - " At stop Dietikon (8503508)\n", - " take the S-Bahn to Wohlen AG\n", - " leaving at 17:03\n", - " (route id: 1-17-A-j19-1).\n", - " Get out at stop Dietikon Stoffelbach (8502186)\n", - " at 17:06.\n", - " Walk 9.7 minutes from stop Dietikon Stoffelbach (8502186)\n", - " to stop Bergdietikon, Baltenschwil (8590200)\n", - "Target stop: stop Bergdietikon, Baltenschwil (8590200)\n", + " Walk 3.1 minutes from stop Uster (8503125)\n", + " to stop Uster, Bahnhof (8573504)\n", + " At stop Uster, Bahnhof (8573504)\n", + " take the Bus to Oetwil am See, Zentrum\n", + " leaving at 17:15\n", + " (route id: 26-842-j19-1).\n", + " Get out at stop Uster, Talacker (8575957)\n", + " at 17:17.\n", + "Target stop: stop Uster, Talacker (8575957)\n", "Requested arrival time: 17:30\n", "\n", "\n", " ---------- 
OPTION 2 ----------\n", - "Departure stop: stop Zürich, Ausserdorfstrasse (8591048)\n", - "Departure time: 15:54\n", + "Departure stop: stop Zürich, Friedhof Enzenbühl (8591142)\n", + "Departure time: 16:00\n", "Number of trips used: 3\n", - "Probability of success: 0.9993\n", - " Walk 6.2 minutes from stop Zürich, Ausserdorfstrasse (8591048)\n", - " to stop Zürich, Seebach (8591354)\n", - " At stop Zürich, Seebach (8591354)\n", - " take the Tram to Zürich, Triemli\n", - " leaving at 16:03\n", - " (route id: 26-14-A-j19-1).\n", - " Get out at stop Zürich, Bahnhofquai/HB (8587349)\n", - " at 16:24.\n", - " Walk 2.8 minutes from stop Zürich, Bahnhofquai/HB (8587349)\n", - " to stop Zürich HB (8503000)\n", - " At stop Zürich HB (8503000)\n", - " take the S-Bahn to Muri AG\n", - " leaving at 16:40\n", - " (route id: 1-42-j19-1).\n", - " Get out at stop Dietikon (8503508)\n", - " at 16:51.\n", - " At stop Dietikon (8503508)\n", - " take the S-Bahn to Wohlen AG\n", - " leaving at 17:03\n", - " (route id: 1-17-A-j19-1).\n", - " Get out at stop Dietikon Stoffelbach (8502186)\n", - " at 17:06.\n", - " Walk 9.7 minutes from stop Dietikon Stoffelbach (8502186)\n", - " to stop Bergdietikon, Baltenschwil (8590200)\n", - "Target stop: stop Bergdietikon, Baltenschwil (8590200)\n", + "Probability of success: 0.9945\n", + " Walk 4.1 minutes from stop Zürich, Friedhof Enzenbühl (8591142)\n", + " to stop Zürich, Rehalp (8591315)\n", + " At stop Zürich, Rehalp (8591315)\n", + " take the S-Bahn to Zürich Stadelhofen FB\n", + " leaving at 16:07\n", + " (route id: 26-18-j19-1).\n", + " Get out at stop Zürich Stadelhofen FB (8503059)\n", + " at 16:17.\n", + " Walk 0.9 minutes from stop Zürich Stadelhofen FB (8503059)\n", + " to stop Zürich Stadelhofen (8503003)\n", + " At stop Zürich Stadelhofen (8503003)\n", + " take the S-Bahn to Uster\n", + " leaving at 16:31\n", + " (route id: 26-9-A-j19-1).\n", + " Get out at stop Uster (8503125)\n", + " at 16:50.\n", + " Walk 3.1 minutes from stop 
Uster (8503125)\n", + " to stop Uster, Bahnhof (8573504)\n", + " At stop Uster, Bahnhof (8573504)\n", + " take the Bus to Oetwil am See, Zentrum\n", + " leaving at 17:15\n", + " (route id: 26-842-j19-1).\n", + " Get out at stop Uster, Talacker (8575957)\n", + " at 17:17.\n", + "Target stop: stop Uster, Talacker (8575957)\n", "Requested arrival time: 17:30\n", "\n", "\n", " ---------- OPTION 3 ----------\n", - "Departure stop: stop Zürich, Ausserdorfstrasse (8591048)\n", - "Departure time: 07:21\n", - "Number of trips used: 3\n", + "Departure stop: stop Zürich, Friedhof Enzenbühl (8591142)\n", + "Departure time: 15:08\n", + "Number of trips used: 4\n", "Probability of success: 1.0000\n", - " At stop Zürich, Ausserdorfstrasse (8591048)\n", - " take the Bus to Zürich, Seebacherplatz\n", - " leaving at 07:21\n", - " (route id: 26-75-A-j19-1).\n", - " Get out at stop Zürich, Seebacherplatz (8591355)\n", - " at 07:24.\n", - " Walk 6.0 minutes from stop Zürich, Seebacherplatz (8591355)\n", - " to stop Zürich Seebach (8503007)\n", - " At stop Zürich Seebach (8503007)\n", - " take the S-Bahn to Baden\n", - " leaving at 16:11\n", - " (route id: 26-6-A-j19-1).\n", - " Get out at stop Regensdorf-Watt (8503526)\n", - " at 16:18.\n", - " Walk 8.9 minutes from stop Regensdorf-Watt (8503526)\n", - " to stop Rudolfstetten Hofacker (8502187)\n", - " At stop Rudolfstetten Hofacker (8502187)\n", - " take the S-Bahn to Dietikon\n", - " leaving at 17:01\n", - " (route id: 1-17-A-j19-1).\n", - " Get out at stop Dietikon Stoffelbach (8502186)\n", - " at 17:06.\n", - " Walk 9.7 minutes from stop Dietikon Stoffelbach (8502186)\n", - " to stop Bergdietikon, Baltenschwil (8590200)\n", - "Target stop: stop Bergdietikon, Baltenschwil (8590200)\n", + " At stop Zürich, Friedhof Enzenbühl (8591142)\n", + " take the Tram to Zürich, Auzelg\n", + " leaving at 15:08\n", + " (route id: 26-11-A-j19-1).\n", + " Get out at stop Zürich, Stampfenbachplatz (8591379)\n", + " at 15:33.\n", + " Walk 5.4 minutes 
from stop Zürich, Stampfenbachplatz (8591379)\n", + " to stop Zürich HB (8503000)\n", + " At stop Zürich HB (8503000)\n", + " take the S-Bahn to Pfäffikon SZ\n", + " leaving at 15:54\n", + " (route id: 26-5-A-j19-1).\n", + " Get out at stop Uster (8503125)\n", + " at 16:08.\n", + " Walk 3.1 minutes from stop Uster (8503125)\n", + " to stop Uster, Bahnhof (8573504)\n", + " At stop Uster, Bahnhof (8573504)\n", + " take the Bus to Uster, Nossikon\n", + " leaving at 16:30\n", + " (route id: 26-813-j19-1).\n", + " Get out at stop Uster, Nossikon (8581546)\n", + " at 16:36.\n", + " At stop Uster, Nossikon (8581546)\n", + " take the Bus to Uster, Bahnhof\n", + " leaving at 16:52\n", + " (route id: 26-813-j19-1).\n", + " Get out at stop Uster, Apothekerstrasse (8581539)\n", + " at 16:55.\n", + " Walk 6.5 minutes from stop Uster, Apothekerstrasse (8581539)\n", + " to stop Uster, Talacker (8575957)\n", + "Target stop: stop Uster, Talacker (8575957)\n", "Requested arrival time: 17:30\n" ] } ], "source": [ "print_solutions(bags_p_s)" ] }, { "cell_type": "code", "execution_count": 75, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", " ---------- OPTION 1\n", "Journey begins at stop Zürich, Tüffenwies (8591402) at time 2020-05-24T16:56:00.000000000, with an overall probability of success = 0.983739837398374 \n", "\n", " At stop Zürich, Tüffenwies (8591402), take route 1315 headed to Zürich, Albisgütli at time 2020-05-24T16:56:00.000000000.\n", " Get out at stop Zürich, Bahnhofquai/HB (8587349) at time 2020-05-24T17:13:00.000000000.\n", "Walk 334 seconds from stop Zürich, Bahnhofquai/HB (8587349) to stop Zürich, Stampfenbachplatz (8591379).\n", "You have arrived at the target stop Zürich, Stampfenbachplatz (8591379) before the target time of 2020-05-24T17:30.\n", "\n", "\n", " ---------- OPTION 2\n", "Journey begins at stop Zürich, Tüffenwies (8591402) at time 2020-05-24T16:18:00.000000000, with an overall 
probability of success = 1.0 \n", "\n", " At stop Zürich, Tüffenwies (8591402), take route 726 headed to Zürich, Bahnhofplatz/HB at time 2020-05-24T16:18:00.000000000.\n", " Get out at stop Zürich, Bahnhofplatz/HB (8587348) at time 2020-05-24T16:37:00.000000000.\n", "Walk 479 seconds from stop Zürich, Bahnhofplatz/HB (8587348) to stop Zürich, Stampfenbachplatz (8591379).\n", "You have arrived at the target stop Zürich, Stampfenbachplatz (8591379) before the target time of 2020-05-24T17:30.\n", "\n", "\n", " ---------- OPTION 3\n", "Journey begins at stop Zürich, Tüffenwies (8591402) at time 2020-05-24T16:42:10.000000000, with an overall probability of success = 0.991869918699187 \n", "\n", "Walk 590 seconds from stop Zürich, Tüffenwies (8591402) to stop Zürich, Grünaustrasse (8591167).\n", " At stop Zürich, Grünaustrasse (8591167), take route 1315 headed to Zürich, Albisgütli at time 2020-05-24T16:54:00.000000000.\n", " Get out at stop Zürich, Sihlquai/HB (8591368) at time 2020-05-24T17:10:00.000000000.\n", "Walk 470 seconds from stop Zürich, Sihlquai/HB (8591368) to stop Zürich, Stampfenbachplatz (8591379).\n", "You have arrived at the target stop Zürich, Stampfenbachplatz (8591379) before the target time of 2020-05-24T17:30.\n" ] } ], "source": [ "for i, label in enumerate(bags[-1][p_s]): # bags[-1]: bag at p_s after the final round\n", "     print('\\n'*2,'-'*10, 'OPTION', i+1)\n", "     label.print_journey()" ] }, { "cell_type": "code", "execution_count": 52, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "n_stops: 11\n", "route_stops: [1007 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027]\n" ] }, { "data": { "text/plain": [ "array([[ 'NaT', '2020-05-21T07:03:00.000000000'],\n", " ['2020-05-21T07:05:00.000000000', '2020-05-21T07:05:00.000000000'],\n", " ['2020-05-21T07:06:00.000000000', '2020-05-21T07:06:00.000000000'],\n", " ['2020-05-21T07:08:00.000000000', '2020-05-21T07:08:00.000000000'],\n", " ['2020-05-21T07:09:00.000000000', 
'2020-05-21T07:09:00.000000000'],\n", " ['2020-05-21T07:10:00.000000000', '2020-05-21T07:10:00.000000000'],\n", " ['2020-05-21T07:11:00.000000000', '2020-05-21T07:11:00.000000000'],\n", " ['2020-05-21T07:12:00.000000000', '2020-05-21T07:12:00.000000000'],\n", " ['2020-05-21T07:14:00.000000000', '2020-05-21T07:14:00.000000000'],\n", " ['2020-05-21T07:15:00.000000000', '2020-05-21T07:15:00.000000000'],\n", " ['2020-05-21T07:16:00.000000000', '2020-05-21T07:16:00.000000000'],\n", " ['2020-05-21T07:17:00.000000000', '2020-05-21T07:17:00.000000000'],\n", " ['2020-05-21T07:18:00.000000000', '2020-05-21T07:18:00.000000000'],\n", " ['2020-05-21T07:19:00.000000000', '2020-05-21T07:19:00.000000000'],\n", " ['2020-05-21T07:20:00.000000000', '2020-05-21T07:20:00.000000000'],\n", " ['2020-05-21T07:21:00.000000000', '2020-05-21T07:21:00.000000000'],\n", " ['2020-05-21T07:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T07:33:00.000000000'],\n", " ['2020-05-21T07:35:00.000000000', '2020-05-21T07:35:00.000000000'],\n", " ['2020-05-21T07:36:00.000000000', '2020-05-21T07:36:00.000000000'],\n", " ['2020-05-21T07:38:00.000000000', '2020-05-21T07:38:00.000000000'],\n", " ['2020-05-21T07:39:00.000000000', '2020-05-21T07:39:00.000000000'],\n", " ['2020-05-21T07:40:00.000000000', '2020-05-21T07:40:00.000000000'],\n", " ['2020-05-21T07:41:00.000000000', '2020-05-21T07:41:00.000000000'],\n", " ['2020-05-21T07:42:00.000000000', '2020-05-21T07:42:00.000000000'],\n", " ['2020-05-21T07:44:00.000000000', '2020-05-21T07:44:00.000000000'],\n", " ['2020-05-21T07:45:00.000000000', '2020-05-21T07:45:00.000000000'],\n", " ['2020-05-21T07:46:00.000000000', '2020-05-21T07:46:00.000000000'],\n", " ['2020-05-21T07:47:00.000000000', '2020-05-21T07:47:00.000000000'],\n", " ['2020-05-21T07:48:00.000000000', '2020-05-21T07:48:00.000000000'],\n", " ['2020-05-21T07:49:00.000000000', '2020-05-21T07:49:00.000000000'],\n", " ['2020-05-21T07:50:00.000000000', '2020-05-21T07:50:00.000000000'],\n", " 
['2020-05-21T07:51:00.000000000', '2020-05-21T07:51:00.000000000'],\n", " ['2020-05-21T07:52:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T08:03:00.000000000'],\n", " ['2020-05-21T08:05:00.000000000', '2020-05-21T08:05:00.000000000'],\n", " ['2020-05-21T08:06:00.000000000', '2020-05-21T08:06:00.000000000'],\n", " ['2020-05-21T08:08:00.000000000', '2020-05-21T08:08:00.000000000'],\n", " ['2020-05-21T08:09:00.000000000', '2020-05-21T08:09:00.000000000'],\n", " ['2020-05-21T08:10:00.000000000', '2020-05-21T08:10:00.000000000'],\n", " ['2020-05-21T08:11:00.000000000', '2020-05-21T08:11:00.000000000'],\n", " ['2020-05-21T08:12:00.000000000', '2020-05-21T08:12:00.000000000'],\n", " ['2020-05-21T08:14:00.000000000', '2020-05-21T08:14:00.000000000'],\n", " ['2020-05-21T08:15:00.000000000', '2020-05-21T08:15:00.000000000'],\n", " ['2020-05-21T08:16:00.000000000', '2020-05-21T08:16:00.000000000'],\n", " ['2020-05-21T08:17:00.000000000', '2020-05-21T08:17:00.000000000'],\n", " ['2020-05-21T08:18:00.000000000', '2020-05-21T08:18:00.000000000'],\n", " ['2020-05-21T08:19:00.000000000', '2020-05-21T08:19:00.000000000'],\n", " ['2020-05-21T08:20:00.000000000', '2020-05-21T08:20:00.000000000'],\n", " ['2020-05-21T08:21:00.000000000', '2020-05-21T08:21:00.000000000'],\n", " ['2020-05-21T08:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T08:33:00.000000000'],\n", " ['2020-05-21T08:35:00.000000000', '2020-05-21T08:35:00.000000000'],\n", " ['2020-05-21T08:36:00.000000000', '2020-05-21T08:36:00.000000000'],\n", " ['2020-05-21T08:38:00.000000000', '2020-05-21T08:38:00.000000000'],\n", " ['2020-05-21T08:39:00.000000000', '2020-05-21T08:39:00.000000000'],\n", " ['2020-05-21T08:40:00.000000000', '2020-05-21T08:40:00.000000000'],\n", " ['2020-05-21T08:41:00.000000000', '2020-05-21T08:41:00.000000000'],\n", " ['2020-05-21T08:42:00.000000000', '2020-05-21T08:42:00.000000000'],\n", " ['2020-05-21T08:44:00.000000000', '2020-05-21T08:44:00.000000000'],\n", " 
['2020-05-21T08:45:00.000000000', '2020-05-21T08:45:00.000000000'],\n", " ['2020-05-21T08:46:00.000000000', '2020-05-21T08:46:00.000000000'],\n", " ['2020-05-21T08:47:00.000000000', '2020-05-21T08:47:00.000000000'],\n", " ['2020-05-21T08:48:00.000000000', '2020-05-21T08:48:00.000000000'],\n", " ['2020-05-21T08:49:00.000000000', '2020-05-21T08:49:00.000000000'],\n", " ['2020-05-21T08:50:00.000000000', '2020-05-21T08:50:00.000000000'],\n", " ['2020-05-21T08:51:00.000000000', '2020-05-21T08:51:00.000000000'],\n", " ['2020-05-21T08:52:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T09:03:00.000000000'],\n", " ['2020-05-21T09:05:00.000000000', '2020-05-21T09:05:00.000000000'],\n", " ['2020-05-21T09:06:00.000000000', '2020-05-21T09:06:00.000000000'],\n", " ['2020-05-21T09:08:00.000000000', '2020-05-21T09:08:00.000000000'],\n", " ['2020-05-21T09:09:00.000000000', '2020-05-21T09:09:00.000000000'],\n", " ['2020-05-21T09:10:00.000000000', '2020-05-21T09:10:00.000000000'],\n", " ['2020-05-21T09:11:00.000000000', '2020-05-21T09:11:00.000000000'],\n", " ['2020-05-21T09:12:00.000000000', '2020-05-21T09:12:00.000000000'],\n", " ['2020-05-21T09:14:00.000000000', '2020-05-21T09:14:00.000000000'],\n", " ['2020-05-21T09:15:00.000000000', '2020-05-21T09:15:00.000000000'],\n", " ['2020-05-21T09:16:00.000000000', '2020-05-21T09:16:00.000000000'],\n", " ['2020-05-21T09:17:00.000000000', '2020-05-21T09:17:00.000000000'],\n", " ['2020-05-21T09:18:00.000000000', '2020-05-21T09:18:00.000000000'],\n", " ['2020-05-21T09:19:00.000000000', '2020-05-21T09:19:00.000000000'],\n", " ['2020-05-21T09:20:00.000000000', '2020-05-21T09:20:00.000000000'],\n", " ['2020-05-21T09:21:00.000000000', '2020-05-21T09:21:00.000000000'],\n", " ['2020-05-21T09:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T10:03:00.000000000'],\n", " ['2020-05-21T10:05:00.000000000', '2020-05-21T10:05:00.000000000'],\n", " ['2020-05-21T10:06:00.000000000', '2020-05-21T10:06:00.000000000'],\n", " 
['2020-05-21T10:08:00.000000000', '2020-05-21T10:08:00.000000000'],\n", " ['2020-05-21T10:09:00.000000000', '2020-05-21T10:09:00.000000000'],\n", " ['2020-05-21T10:10:00.000000000', '2020-05-21T10:10:00.000000000'],\n", " ['2020-05-21T10:11:00.000000000', '2020-05-21T10:11:00.000000000'],\n", " ['2020-05-21T10:12:00.000000000', '2020-05-21T10:12:00.000000000'],\n", " ['2020-05-21T10:14:00.000000000', '2020-05-21T10:14:00.000000000'],\n", " ['2020-05-21T10:15:00.000000000', '2020-05-21T10:15:00.000000000'],\n", " ['2020-05-21T10:16:00.000000000', '2020-05-21T10:16:00.000000000'],\n", " ['2020-05-21T10:17:00.000000000', '2020-05-21T10:17:00.000000000'],\n", " ['2020-05-21T10:18:00.000000000', '2020-05-21T10:18:00.000000000'],\n", " ['2020-05-21T10:19:00.000000000', '2020-05-21T10:19:00.000000000'],\n", " ['2020-05-21T10:20:00.000000000', '2020-05-21T10:20:00.000000000'],\n", " ['2020-05-21T10:21:00.000000000', '2020-05-21T10:21:00.000000000'],\n", " ['2020-05-21T10:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T11:03:00.000000000'],\n", " ['2020-05-21T11:05:00.000000000', '2020-05-21T11:05:00.000000000'],\n", " ['2020-05-21T11:06:00.000000000', '2020-05-21T11:06:00.000000000'],\n", " ['2020-05-21T11:08:00.000000000', '2020-05-21T11:08:00.000000000'],\n", " ['2020-05-21T11:09:00.000000000', '2020-05-21T11:09:00.000000000'],\n", " ['2020-05-21T11:10:00.000000000', '2020-05-21T11:10:00.000000000'],\n", " ['2020-05-21T11:11:00.000000000', '2020-05-21T11:11:00.000000000'],\n", " ['2020-05-21T11:12:00.000000000', '2020-05-21T11:12:00.000000000'],\n", " ['2020-05-21T11:14:00.000000000', '2020-05-21T11:14:00.000000000'],\n", " ['2020-05-21T11:15:00.000000000', '2020-05-21T11:15:00.000000000'],\n", " ['2020-05-21T11:16:00.000000000', '2020-05-21T11:16:00.000000000'],\n", " ['2020-05-21T11:17:00.000000000', '2020-05-21T11:17:00.000000000'],\n", " ['2020-05-21T11:18:00.000000000', '2020-05-21T11:18:00.000000000'],\n", " ['2020-05-21T11:19:00.000000000', 
'2020-05-21T11:19:00.000000000'],\n", " ['2020-05-21T11:20:00.000000000', '2020-05-21T11:20:00.000000000'],\n", " ['2020-05-21T11:21:00.000000000', '2020-05-21T11:21:00.000000000'],\n", " ['2020-05-21T11:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T12:03:00.000000000'],\n", " ['2020-05-21T12:05:00.000000000', '2020-05-21T12:05:00.000000000'],\n", " ['2020-05-21T12:06:00.000000000', '2020-05-21T12:06:00.000000000'],\n", " ['2020-05-21T12:08:00.000000000', '2020-05-21T12:08:00.000000000'],\n", " ['2020-05-21T12:09:00.000000000', '2020-05-21T12:09:00.000000000'],\n", " ['2020-05-21T12:10:00.000000000', '2020-05-21T12:10:00.000000000'],\n", " ['2020-05-21T12:11:00.000000000', '2020-05-21T12:11:00.000000000'],\n", " ['2020-05-21T12:12:00.000000000', '2020-05-21T12:12:00.000000000'],\n", " ['2020-05-21T12:14:00.000000000', '2020-05-21T12:14:00.000000000'],\n", " ['2020-05-21T12:15:00.000000000', '2020-05-21T12:15:00.000000000'],\n", " ['2020-05-21T12:16:00.000000000', '2020-05-21T12:16:00.000000000'],\n", " ['2020-05-21T12:17:00.000000000', '2020-05-21T12:17:00.000000000'],\n", " ['2020-05-21T12:18:00.000000000', '2020-05-21T12:18:00.000000000'],\n", " ['2020-05-21T12:19:00.000000000', '2020-05-21T12:19:00.000000000'],\n", " ['2020-05-21T12:20:00.000000000', '2020-05-21T12:20:00.000000000'],\n", " ['2020-05-21T12:21:00.000000000', '2020-05-21T12:21:00.000000000'],\n", " ['2020-05-21T12:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T13:03:00.000000000'],\n", " ['2020-05-21T13:05:00.000000000', '2020-05-21T13:05:00.000000000'],\n", " ['2020-05-21T13:06:00.000000000', '2020-05-21T13:06:00.000000000'],\n", " ['2020-05-21T13:08:00.000000000', '2020-05-21T13:08:00.000000000'],\n", " ['2020-05-21T13:09:00.000000000', '2020-05-21T13:09:00.000000000'],\n", " ['2020-05-21T13:10:00.000000000', '2020-05-21T13:10:00.000000000'],\n", " ['2020-05-21T13:11:00.000000000', '2020-05-21T13:11:00.000000000'],\n", " ['2020-05-21T13:12:00.000000000', 
'2020-05-21T13:12:00.000000000'],\n", " ['2020-05-21T13:14:00.000000000', '2020-05-21T13:14:00.000000000'],\n", " ['2020-05-21T13:15:00.000000000', '2020-05-21T13:15:00.000000000'],\n", " ['2020-05-21T13:16:00.000000000', '2020-05-21T13:16:00.000000000'],\n", " ['2020-05-21T13:17:00.000000000', '2020-05-21T13:17:00.000000000'],\n", " ['2020-05-21T13:18:00.000000000', '2020-05-21T13:18:00.000000000'],\n", " ['2020-05-21T13:19:00.000000000', '2020-05-21T13:19:00.000000000'],\n", " ['2020-05-21T13:20:00.000000000', '2020-05-21T13:20:00.000000000'],\n", " ['2020-05-21T13:21:00.000000000', '2020-05-21T13:21:00.000000000'],\n", " ['2020-05-21T13:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T14:03:00.000000000'],\n", " ['2020-05-21T14:05:00.000000000', '2020-05-21T14:05:00.000000000'],\n", " ['2020-05-21T14:06:00.000000000', '2020-05-21T14:06:00.000000000'],\n", " ['2020-05-21T14:08:00.000000000', '2020-05-21T14:08:00.000000000'],\n", " ['2020-05-21T14:09:00.000000000', '2020-05-21T14:09:00.000000000'],\n", " ['2020-05-21T14:10:00.000000000', '2020-05-21T14:10:00.000000000'],\n", " ['2020-05-21T14:11:00.000000000', '2020-05-21T14:11:00.000000000'],\n", " ['2020-05-21T14:12:00.000000000', '2020-05-21T14:12:00.000000000'],\n", " ['2020-05-21T14:14:00.000000000', '2020-05-21T14:14:00.000000000'],\n", " ['2020-05-21T14:15:00.000000000', '2020-05-21T14:15:00.000000000'],\n", " ['2020-05-21T14:16:00.000000000', '2020-05-21T14:16:00.000000000'],\n", " ['2020-05-21T14:17:00.000000000', '2020-05-21T14:17:00.000000000'],\n", " ['2020-05-21T14:18:00.000000000', '2020-05-21T14:18:00.000000000'],\n", " ['2020-05-21T14:19:00.000000000', '2020-05-21T14:19:00.000000000'],\n", " ['2020-05-21T14:20:00.000000000', '2020-05-21T14:20:00.000000000'],\n", " ['2020-05-21T14:21:00.000000000', '2020-05-21T14:21:00.000000000'],\n", " ['2020-05-21T14:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T15:03:00.000000000'],\n", " ['2020-05-21T15:05:00.000000000', 
'2020-05-21T15:05:00.000000000'],\n", " ['2020-05-21T15:06:00.000000000', '2020-05-21T15:06:00.000000000'],\n", " ['2020-05-21T15:08:00.000000000', '2020-05-21T15:08:00.000000000'],\n", " ['2020-05-21T15:09:00.000000000', '2020-05-21T15:09:00.000000000'],\n", " ['2020-05-21T15:10:00.000000000', '2020-05-21T15:10:00.000000000'],\n", " ['2020-05-21T15:11:00.000000000', '2020-05-21T15:11:00.000000000'],\n", " ['2020-05-21T15:12:00.000000000', '2020-05-21T15:12:00.000000000'],\n", " ['2020-05-21T15:14:00.000000000', '2020-05-21T15:14:00.000000000'],\n", " ['2020-05-21T15:15:00.000000000', '2020-05-21T15:15:00.000000000'],\n", " ['2020-05-21T15:16:00.000000000', '2020-05-21T15:16:00.000000000'],\n", " ['2020-05-21T15:17:00.000000000', '2020-05-21T15:17:00.000000000'],\n", " ['2020-05-21T15:18:00.000000000', '2020-05-21T15:18:00.000000000'],\n", " ['2020-05-21T15:19:00.000000000', '2020-05-21T15:19:00.000000000'],\n", " ['2020-05-21T15:20:00.000000000', '2020-05-21T15:20:00.000000000'],\n", " ['2020-05-21T15:21:00.000000000', '2020-05-21T15:21:00.000000000'],\n", " ['2020-05-21T15:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T16:03:00.000000000'],\n", " ['2020-05-21T16:05:00.000000000', '2020-05-21T16:05:00.000000000'],\n", " ['2020-05-21T16:06:00.000000000', '2020-05-21T16:06:00.000000000'],\n", " ['2020-05-21T16:08:00.000000000', '2020-05-21T16:08:00.000000000'],\n", " ['2020-05-21T16:09:00.000000000', '2020-05-21T16:09:00.000000000'],\n", " ['2020-05-21T16:10:00.000000000', '2020-05-21T16:10:00.000000000'],\n", " ['2020-05-21T16:11:00.000000000', '2020-05-21T16:11:00.000000000'],\n", " ['2020-05-21T16:12:00.000000000', '2020-05-21T16:12:00.000000000'],\n", " ['2020-05-21T16:14:00.000000000', '2020-05-21T16:14:00.000000000'],\n", " ['2020-05-21T16:15:00.000000000', '2020-05-21T16:15:00.000000000'],\n", " ['2020-05-21T16:16:00.000000000', '2020-05-21T16:16:00.000000000'],\n", " ['2020-05-21T16:17:00.000000000', '2020-05-21T16:17:00.000000000'],\n", " 
['2020-05-21T16:18:00.000000000', '2020-05-21T16:18:00.000000000'],\n", " ['2020-05-21T16:19:00.000000000', '2020-05-21T16:19:00.000000000'],\n", " ['2020-05-21T16:20:00.000000000', '2020-05-21T16:20:00.000000000'],\n", " ['2020-05-21T16:21:00.000000000', '2020-05-21T16:21:00.000000000'],\n", " ['2020-05-21T16:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T16:33:00.000000000'],\n", " ['2020-05-21T16:35:00.000000000', '2020-05-21T16:35:00.000000000'],\n", " ['2020-05-21T16:36:00.000000000', '2020-05-21T16:36:00.000000000'],\n", " ['2020-05-21T16:38:00.000000000', '2020-05-21T16:38:00.000000000'],\n", " ['2020-05-21T16:39:00.000000000', '2020-05-21T16:39:00.000000000'],\n", " ['2020-05-21T16:40:00.000000000', '2020-05-21T16:40:00.000000000'],\n", " ['2020-05-21T16:41:00.000000000', '2020-05-21T16:41:00.000000000'],\n", " ['2020-05-21T16:42:00.000000000', '2020-05-21T16:42:00.000000000'],\n", " ['2020-05-21T16:44:00.000000000', '2020-05-21T16:44:00.000000000'],\n", " ['2020-05-21T16:45:00.000000000', '2020-05-21T16:45:00.000000000'],\n", " ['2020-05-21T16:46:00.000000000', '2020-05-21T16:46:00.000000000'],\n", " ['2020-05-21T16:47:00.000000000', '2020-05-21T16:47:00.000000000'],\n", " ['2020-05-21T16:48:00.000000000', '2020-05-21T16:48:00.000000000'],\n", " ['2020-05-21T16:49:00.000000000', '2020-05-21T16:49:00.000000000'],\n", " ['2020-05-21T16:50:00.000000000', '2020-05-21T16:50:00.000000000'],\n", " ['2020-05-21T16:51:00.000000000', '2020-05-21T16:51:00.000000000'],\n", " ['2020-05-21T16:52:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T17:03:00.000000000'],\n", " ['2020-05-21T17:05:00.000000000', '2020-05-21T17:05:00.000000000'],\n", " ['2020-05-21T17:06:00.000000000', '2020-05-21T17:06:00.000000000'],\n", " ['2020-05-21T17:08:00.000000000', '2020-05-21T17:08:00.000000000'],\n", " ['2020-05-21T17:09:00.000000000', '2020-05-21T17:09:00.000000000'],\n", " ['2020-05-21T17:10:00.000000000', '2020-05-21T17:10:00.000000000'],\n", " 
['2020-05-21T17:11:00.000000000', '2020-05-21T17:11:00.000000000'],\n", " ['2020-05-21T17:12:00.000000000', '2020-05-21T17:12:00.000000000'],\n", " ['2020-05-21T17:14:00.000000000', '2020-05-21T17:14:00.000000000'],\n", " ['2020-05-21T17:15:00.000000000', '2020-05-21T17:15:00.000000000'],\n", " ['2020-05-21T17:16:00.000000000', '2020-05-21T17:16:00.000000000'],\n", " ['2020-05-21T17:17:00.000000000', '2020-05-21T17:17:00.000000000'],\n", " ['2020-05-21T17:18:00.000000000', '2020-05-21T17:18:00.000000000'],\n", " ['2020-05-21T17:19:00.000000000', '2020-05-21T17:19:00.000000000'],\n", " ['2020-05-21T17:20:00.000000000', '2020-05-21T17:20:00.000000000'],\n", " ['2020-05-21T17:21:00.000000000', '2020-05-21T17:21:00.000000000'],\n", " ['2020-05-21T17:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T17:33:00.000000000'],\n", " ['2020-05-21T17:35:00.000000000', '2020-05-21T17:35:00.000000000'],\n", " ['2020-05-21T17:36:00.000000000', '2020-05-21T17:36:00.000000000'],\n", " ['2020-05-21T17:38:00.000000000', '2020-05-21T17:38:00.000000000'],\n", " ['2020-05-21T17:39:00.000000000', '2020-05-21T17:39:00.000000000'],\n", " ['2020-05-21T17:40:00.000000000', '2020-05-21T17:40:00.000000000'],\n", " ['2020-05-21T17:41:00.000000000', '2020-05-21T17:41:00.000000000'],\n", " ['2020-05-21T17:42:00.000000000', '2020-05-21T17:42:00.000000000'],\n", " ['2020-05-21T17:44:00.000000000', '2020-05-21T17:44:00.000000000'],\n", " ['2020-05-21T17:45:00.000000000', '2020-05-21T17:45:00.000000000'],\n", " ['2020-05-21T17:46:00.000000000', '2020-05-21T17:46:00.000000000'],\n", " ['2020-05-21T17:47:00.000000000', '2020-05-21T17:47:00.000000000'],\n", " ['2020-05-21T17:48:00.000000000', '2020-05-21T17:48:00.000000000'],\n", " ['2020-05-21T17:49:00.000000000', '2020-05-21T17:49:00.000000000'],\n", " ['2020-05-21T17:50:00.000000000', '2020-05-21T17:50:00.000000000'],\n", " ['2020-05-21T17:51:00.000000000', '2020-05-21T17:51:00.000000000'],\n", " ['2020-05-21T17:52:00.000000000', 'NaT'],\n", " [ 
'NaT', '2020-05-21T18:03:00.000000000'],\n", " ['2020-05-21T18:05:00.000000000', '2020-05-21T18:05:00.000000000'],\n", " ['2020-05-21T18:06:00.000000000', '2020-05-21T18:06:00.000000000'],\n", " ['2020-05-21T18:08:00.000000000', '2020-05-21T18:08:00.000000000'],\n", " ['2020-05-21T18:09:00.000000000', '2020-05-21T18:09:00.000000000'],\n", " ['2020-05-21T18:10:00.000000000', '2020-05-21T18:10:00.000000000'],\n", " ['2020-05-21T18:11:00.000000000', '2020-05-21T18:11:00.000000000'],\n", " ['2020-05-21T18:12:00.000000000', '2020-05-21T18:12:00.000000000'],\n", " ['2020-05-21T18:14:00.000000000', '2020-05-21T18:14:00.000000000'],\n", " ['2020-05-21T18:15:00.000000000', '2020-05-21T18:15:00.000000000'],\n", " ['2020-05-21T18:16:00.000000000', '2020-05-21T18:16:00.000000000'],\n", " ['2020-05-21T18:17:00.000000000', '2020-05-21T18:17:00.000000000'],\n", " ['2020-05-21T18:18:00.000000000', '2020-05-21T18:18:00.000000000'],\n", " ['2020-05-21T18:19:00.000000000', '2020-05-21T18:19:00.000000000'],\n", " ['2020-05-21T18:20:00.000000000', '2020-05-21T18:20:00.000000000'],\n", " ['2020-05-21T18:21:00.000000000', '2020-05-21T18:21:00.000000000'],\n", " ['2020-05-21T18:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T18:33:00.000000000'],\n", " ['2020-05-21T18:35:00.000000000', '2020-05-21T18:35:00.000000000'],\n", " ['2020-05-21T18:36:00.000000000', '2020-05-21T18:36:00.000000000'],\n", " ['2020-05-21T18:38:00.000000000', '2020-05-21T18:38:00.000000000'],\n", " ['2020-05-21T18:39:00.000000000', '2020-05-21T18:39:00.000000000'],\n", " ['2020-05-21T18:40:00.000000000', '2020-05-21T18:40:00.000000000'],\n", " ['2020-05-21T18:41:00.000000000', '2020-05-21T18:41:00.000000000'],\n", " ['2020-05-21T18:42:00.000000000', '2020-05-21T18:42:00.000000000'],\n", " ['2020-05-21T18:44:00.000000000', '2020-05-21T18:44:00.000000000'],\n", " ['2020-05-21T18:45:00.000000000', '2020-05-21T18:45:00.000000000'],\n", " ['2020-05-21T18:46:00.000000000', '2020-05-21T18:46:00.000000000'],\n", " 
['2020-05-21T18:47:00.000000000', '2020-05-21T18:47:00.000000000'],\n", " ['2020-05-21T18:48:00.000000000', '2020-05-21T18:48:00.000000000'],\n", " ['2020-05-21T18:49:00.000000000', '2020-05-21T18:49:00.000000000'],\n", " ['2020-05-21T18:50:00.000000000', '2020-05-21T18:50:00.000000000'],\n", " ['2020-05-21T18:51:00.000000000', '2020-05-21T18:51:00.000000000'],\n", " ['2020-05-21T18:52:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T19:03:00.000000000'],\n", " ['2020-05-21T19:05:00.000000000', '2020-05-21T19:05:00.000000000'],\n", " ['2020-05-21T19:06:00.000000000', '2020-05-21T19:06:00.000000000'],\n", " ['2020-05-21T19:08:00.000000000', '2020-05-21T19:08:00.000000000'],\n", " ['2020-05-21T19:09:00.000000000', '2020-05-21T19:09:00.000000000'],\n", " ['2020-05-21T19:10:00.000000000', '2020-05-21T19:10:00.000000000'],\n", " ['2020-05-21T19:11:00.000000000', '2020-05-21T19:11:00.000000000'],\n", " ['2020-05-21T19:12:00.000000000', '2020-05-21T19:12:00.000000000'],\n", " ['2020-05-21T19:14:00.000000000', '2020-05-21T19:14:00.000000000'],\n", " ['2020-05-21T19:15:00.000000000', '2020-05-21T19:15:00.000000000'],\n", " ['2020-05-21T19:16:00.000000000', '2020-05-21T19:16:00.000000000'],\n", " ['2020-05-21T19:17:00.000000000', '2020-05-21T19:17:00.000000000'],\n", " ['2020-05-21T19:18:00.000000000', '2020-05-21T19:18:00.000000000'],\n", " ['2020-05-21T19:19:00.000000000', '2020-05-21T19:19:00.000000000'],\n", " ['2020-05-21T19:20:00.000000000', '2020-05-21T19:20:00.000000000'],\n", " ['2020-05-21T19:21:00.000000000', '2020-05-21T19:21:00.000000000'],\n", " ['2020-05-21T19:22:00.000000000', 'NaT'],\n", " [ 'NaT', '2020-05-21T19:33:00.000000000'],\n", " ['2020-05-21T19:35:00.000000000', '2020-05-21T19:35:00.000000000'],\n", " ['2020-05-21T19:36:00.000000000', '2020-05-21T19:36:00.000000000'],\n", " ['2020-05-21T19:38:00.000000000', '2020-05-21T19:38:00.000000000'],\n", " ['2020-05-21T19:39:00.000000000', '2020-05-21T19:39:00.000000000'],\n", " 
['2020-05-21T19:40:00.000000000', '2020-05-21T19:40:00.000000000'],\n", " ['2020-05-21T19:41:00.000000000', '2020-05-21T19:41:00.000000000'],\n", " ['2020-05-21T19:42:00.000000000', '2020-05-21T19:42:00.000000000'],\n", " ['2020-05-21T19:44:00.000000000', '2020-05-21T19:44:00.000000000'],\n", " ['2020-05-21T19:45:00.000000000', '2020-05-21T19:45:00.000000000'],\n", " ['2020-05-21T19:46:00.000000000', '2020-05-21T19:46:00.000000000'],\n", " ['2020-05-21T19:47:00.000000000', '2020-05-21T19:47:00.000000000'],\n", " ['2020-05-21T19:48:00.000000000', '2020-05-21T19:48:00.000000000'],\n", " ['2020-05-21T19:49:00.000000000', '2020-05-21T19:49:00.000000000'],\n", " ['2020-05-21T19:50:00.000000000', '2020-05-21T19:50:00.000000000'],\n", " ['2020-05-21T19:51:00.000000000', '2020-05-21T19:51:00.000000000'],\n", " ['2020-05-21T19:52:00.000000000', 'NaT']],\n", " dtype='datetime64[ns]')" ] }, "execution_count": 52, "metadata": {}, "output_type": "execute_result" } ], "source": [ "r = 382\n", "print('n_stops:', routes[r][1])\n", "print('route_stops:', get_stops(r))\n", "stopTimes[ routes[r][3] : routes[r+1][3] ]" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "5" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(bags_p_s)" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [], "source": [ "for i, label in enumerate(bags_p_s[2]):\n", " print('\\n'*2,'-'*10, 'OPTION', i, 'with at most 2 trips')\n", " label.print_journey()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Code for prototyping and debugging:" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "iinfo(min=0, max=4294967295, dtype=uint32)" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "np.iinfo(np.uint32)" ] }, { "cell_type": "code", "execution_count": 26, "metadata": 
{}, "outputs": [ { "ename": "RuntimeError", "evalue": "Set changed size during iteration", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mRuntimeError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0ms1\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m{\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m2\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m3\u001b[0m\u001b[0;34m}\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0;32mfor\u001b[0m \u001b[0mi\u001b[0m \u001b[0;32min\u001b[0m \u001b[0ms1\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 3\u001b[0m \u001b[0ms1\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0madd\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mi\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0;36m10\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0ms1\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;31mRuntimeError\u001b[0m: Set changed size during iteration" ] } ], "source": [ "s1 = {1,2,3}\n", "for i in s1:\n", " s1.add(i*10)\n", "s1" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(array([99., 80., 47., 22., 15., 11., 3., 0., 1., 1.]),\n", " array([1.0, 6.9, 12.8, 18.700000000000003, 24.6, 30.5, 36.400000000000006,\n", " 42.300000000000004, 48.2, 54.1, 60.0], dtype=object),\n", " )" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" }, { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAXcAAAD4CAYAAAAXUaZHAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAN3klEQVR4nO3df6jd9X3H8edrps5qtyaaS8gS3c0wVGTMH1ysYimd2YbVUv1DxFJGKIH8Yze7Ftq4wWT/KYxaB0MIapuBWJ3tlqCjrUstY38s7Y3aGpM6MxtrJJor03Xrxtqs7/1xvsJddqP3nO89npwPzwdczvl+vt9zvu83+fq6Xz/nfL83VYUkqS2/NOkCJEkrz3CXpAYZ7pLUIMNdkhpkuEtSg1ZNugCAtWvX1uzs7KTLkKSpsn///teramapdadFuM/OzjI/Pz/pMiRpqiR56VTrnJaRpAYZ7pLUIMNdkhr0juGe5IEkx5McWDR2bpInkrzQPa7pxpPkL5IcTvKDJJePs3hJ0tKWc+b+FeDak8Z2AHurajOwt1sG+CiwufvZDty7MmVKkobxjuFeVf8A/OtJwzcAu7rnu4AbF43/VQ38E7A6yfqVKlaStDyjzrmvq6pj3fNXgXXd8w3Ay4u2O9qNSZLeRb0/UK3BPYOHvm9wku1J5pPMLyws9C1DkrTIqOH+2lvTLd3j8W78FeD8Rdtt7Mb+n6raWVVzVTU3M7PkBVaSpBGNeoXqHmArcGf3uHvR+KeTfBX4IPBvi6ZvxmJ2x+PjfPu3deTO6ye2b0l6O+8Y7kkeAj4CrE1yFLiDQag/kmQb8BJwc7f53wHXAYeB/wQ+NYaaJUnv4B3Dvao+cYpVW5bYtoBb+xYlSerHK1QlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBhnuktQgw12SGmS4S1KDDHdJapDhLkkNMtwlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBvcI9yR8leS7JgSQPJTkryaYk+5IcTvJwkjNXqlhJ0vKMHO5JNgB/CMxV1W8CZwC3AHcBd1fVhcAbwLaVKFSStHx9p2VWAe9Nsgo4GzgGXAM82q3fBdzYcx+SpCGtGvWFVfVKkj8Hfgz8F/AtYD/wZlWd6DY7CmxY6vVJtgPbAS644IJRy5io2R2PT2S/R+68fiL7lTQ9+kzLrAFuADYBvwacA1y73NdX1c6qmququZmZmVHLkCQtoc+0zO8AP6qqhar6OfB14GpgdTdNA7AReKVnjZKkIfUJ9x8DVyY5O0mALcBB4Engpm6brcDufiVKkoY1crhX1T4GH5w+BTzbvddO4AvAZ5McBs4D7l+BOiVJQxj5A1WAqroDuOOk4ReBK/q8rySpH69QlaQGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBhnuktQgw12SGmS4S1KDDHdJapDhLkkNMtwlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBvUK9ySrkzya5IdJDiW5Ksm5SZ5I8kL3uGalipUkLU/fM/d7gG9U1UXAJcAhYAewt6o2A3u7ZUnSu2jkcE/yfuDDwP0AVfWzqnoTuAHY1W22C7ixb5GSpOH0OXPfBCwAX07ydJL7kpwDrKuqY902rwLrlnpxku1J5pPMLyws9ChDknSyPuG+CrgcuLeqLgN+yklTMFVVQC314qraWVVzVTU3MzPTowxJ0sn6hPtR4GhV7euWH2UQ9q8lWQ/QPR7vV6IkaVgjh3tVvQq8nOQD3dAW4CCwB9jajW0FdveqUJI0tFU
9X/8HwINJzgReBD7F4BfGI0m2AS8BN/fchyRpSL3CvaqeAeaWWLWlz/tKkvrxClVJapDhLkkNMtwlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBq2adAEa3uyOxye27yN3Xj+xfUtaPs/cJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBvUO9yRnJHk6yWPd8qYk+5IcTvJwkjP7lylJGsZKnLnfBhxatHwXcHdVXQi8AWxbgX1IkobQK9yTbASuB+7rlgNcAzzabbILuLHPPiRJw+t75v4l4PPAL7rl84A3q+pEt3wU2LDUC5NsTzKfZH5hYaFnGZKkxUYO9yQfA45X1f5RXl9VO6tqrqrmZmZmRi1DkrSEPn+s42rg40muA84CfhW4B1idZFV39r4ReKV/mZKkYYx85l5Vt1fVxqqaBW4Bvl1VnwSeBG7qNtsK7O5dpSRpKOP4nvsXgM8mOcxgDv7+MexDkvQ2VuRvqFbVd4DvdM9fBK5YifeVJI3GK1QlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBhnuktQgw12SGmS4S1KDDHdJapDhLkkNMtwlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDVo16QI0XWZ3PD6R/R658/qJ7FeaViOfuSc5P8mTSQ4meS7Jbd34uUmeSPJC97hm5cqVJC1Hn2mZE8Dnqupi4Erg1iQXAzuAvVW1GdjbLUuS3kUjh3tVHauqp7rn/w4cAjYANwC7us12ATf2LVKSNJwV+UA1ySxwGbAPWFdVx7pVrwLrTvGa7Unmk8wvLCysRBmSpE7vcE/yPuBrwGeq6ieL11VVAbXU66pqZ1XNVdXczMxM3zIkSYv0Cvck72EQ7A9W1de74deSrO/WrweO9ytRkjSsPt+WCXA/cKiqvrho1R5ga/d8K7B79PIkSaPo8z33q4HfB55N8kw39sfAncAjSbYBLwE39ytRkjSskcO9qv4RyClWbxn1fSVJ/Xn7AUlqkOEuSQ0y3CWpQYa7JDXIu0JqKng3Smk4nrlLUoMMd0lqkOEuSQ0y3CWpQYa7JDXIcJekBhnuktQgw12SGuRFTNLbmNTFU+AFVOrHM3dJapDhLkkNMtwlqUGGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWqQ4S5JDTLcJalBhrskNchwl6QGGe6S1CDDXZIaZLhLUoP8Yx3SaWpSfyjEPxLSBs/cJalBhrskNWgs4Z7k2iTPJzmcZMc49iFJOrUVn3NPcgbwl8DvAkeB7yXZU1UHV3pfktoyyT9IPinj+oxjHGfuVwCHq+rFqvoZ8FXghjHsR5J0CuP4tswG4OVFy0eBD568UZLtwPZu8T+SPL+M914LvN67wtNHS/201Au01c9QveSuMVayMlr6tyF39ern10+1YmJfhayqncDOYV6TZL6q5sZU0ruupX5a6gXa6qelXsB+lmsc0zKvAOcvWt7YjUmS3iXjCPfvAZuTbEpyJnALsGcM+5EkncKKT8tU1Ykknwa+CZwBPFBVz63Q2w81jTMFWuqnpV6grX5a6gXsZ1lSVeN4X0nSBHmFqiQ1yHCXpAZNTbhP+y0NkjyQ5HiSA4vGzk3yRJIXusc1k6xxuZKcn+TJJAeTPJfktm586vpJclaS7yb5ftfLn3Xjm5Ls6463h7svB0yNJGckeTrJY93y1PaT5EiSZ5M8k2S+G5u6Yw0gyeokjyb5YZJDSa4aVy9TEe6LbmnwUeBi4BNJLp5sVUP7CnDtSWM7gL1VtRnY2y1PgxPA56rqYuBK4Nbu32M
a+/lv4JqqugS4FLg2yZXAXcDdVXUh8AawbYI1juI24NCi5Wnv57er6tJF3wefxmMN4B7gG1V1EXAJg3+j8fRSVaf9D3AV8M1Fy7cDt0+6rhH6mAUOLFp+HljfPV8PPD/pGkfsazeDewlNdT/A2cBTDK6ofh1Y1Y3/n+PvdP9hcG3JXuAa4DEgU97PEWDtSWNTd6wB7wd+RPdFlnH3MhVn7ix9S4MNE6plJa2rqmPd81eBdZMsZhRJZoHLgH1MaT/dFMYzwHHgCeBfgDer6kS3ybQdb18CPg/8ols+j+nup4BvJdnf3bYEpvNY2wQsAF/upszuS3IOY+plWsK9eTX4tT1V30tN8j7ga8Bnquoni9dNUz9V9T9VdSmDM94rgIsmXNLIknwMOF5V+yddywr6UFVdzmBa9tYkH168coqOtVXA5cC9VXUZ8FNOmoJZyV6mJdxbvaXBa0nWA3SPxydcz7IleQ+DYH+wqr7eDU9tPwBV9SbwJINpi9VJ3rrIb5qOt6uBjyc5wuCOrNcwmOed1n6oqle6x+PA3zD4BTyNx9pR4GhV7euWH2UQ9mPpZVrCvdVbGuwBtnbPtzKYuz7tJQlwP3Coqr64aNXU9ZNkJsnq7vl7GXx2cIhByN/UbTYVvQBU1e1VtbGqZhn8d/LtqvokU9pPknOS/Mpbz4HfAw4whcdaVb0KvJzkA93QFuAg4+pl0h8yDPFhxHXAPzOYD/2TSdczQv0PAceAnzP4Db6NwVzoXuAF4O+Bcydd5zJ7+RCD/3X8AfBM93PdNPYD/BbwdNfLAeBPu/HfAL4LHAb+GvjlSdc6Qm8fAR6b5n66ur/f/Tz31n/703isdXVfCsx3x9vfAmvG1Yu3H5CkBk3LtIwkaQiGuyQ1yHCXpAYZ7pLUIMNdkhpkuEtSgwx3SWrQ/wJ3eeD1vPXiQgAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "import matplotlib.pyplot as plt\n", "# Plot distribution of n_stops\n", "plt.hist(routes[:,1])" ] }, { "cell_type": "code", "execution_count": 95, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[0]" ] }, "execution_count": 95, "metadata": {}, "output_type": "execute_result" } ], "source": [ "list(range(0,-1,-1))" ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[[[], [<__main__.RouteLabel at 0x7fd99af341d0>]],\n", " [[], [<__main__.RouteLabel at 0x7fd99af341d0>]]]" ] }, "execution_count": 42, "metadata": {}, "output_type": "execute_result" } ], "source": [ "l = [[[RouteLabel(1,1,0,1,TargetLabel(p_t, tau_0),1)] for _ in range(2)]]\n", "l.append(l[-1].copy())\n", "l[1][0].remove(l[1][0][0])\n", "l" ] }, { "cell_type": "code", "execution_count": 49, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "None\n" ] }, { "data": { "text/plain": [ "[0, 1, 3, 4, 5]" ] }, "execution_count": 49, "metadata": {}, "output_type": "execute_result" } ], "source": [ "l = list(range(6))\n", "ret = l.remove(2)\n", "print(ret)\n", "l" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Departure at 2020-05-11T08:00 from stop 0.\n", " Departure at 2020-05-11T08:10 from stop 0.\n" ] } ], "source": [ "B = [RouteLabel(1,1,0,0,TargetLabel(p_t, tau_0),0.8), RouteLabel(1,1,0,1,TargetLabel(p_t, tau_0),1)]\n", "B[0].update_stop(0)\n", "B[1].update_stop(0)\n", "for l in B:\n", " l.pprint()" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Departure at 2020-05-11T08:20 from stop 0.\n", "True\n", " Departure at 2020-05-11T08:10 from stop 0.\n", " Departure at 2020-05-11T08:20 from stop 555.\n", "----------\n", " 
Departure at 2020-05-11T08:20 from stop 555.\n" ] } ], "source": [ "label = RouteLabel(4,0, 2, 0, TargetLabel(p_t, tau_0), 0.9)\n", "label.update_stop(0)\n", "label.pprint()\n", "print(update_bag(B, label, 0))\n", "label.stop = 555\n", "for l in B:\n", " l.pprint()\n", "print('-'*10)\n", "label.pprint()" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[[1, 2, 3], [1, 2, 666]]" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "bags = [[1,2,3]]\n", "bags.append(bags[-1].copy())\n", "bags[1][2] = 666\n", "bags" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p_s = 0 # start stop = A\n", "p_t = 4 # target stop = E\n", "tau_0 = np.datetime64('2020-05-11T08:05') # departure time 08:05\n", "k_max = 10 # we set a maximum number of transports to pre-allocate memory for the numpy array tau_i" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# initialization\n", "n_stops = len(stops)\n", "\n", "# earliest arrival time at each stop for each round.\n", "tau = np.full(shape=(k_max, n_stops), fill_value = np.datetime64('2100-01-01T00:00')) # 2100 instead of infinity # number of stops * max number of transports\n", "\n", "# earliest arrival time at each stop, indep. 
of round\n", "tau_star = np.full(shape=n_stops, fill_value = np.datetime64('2100-01-01T00:00'))\n", "\n", "marked = [p_s]\n", "q = []\n", "tau[0, p_s] = tau_0" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i)[0][0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p_i" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "t_r_dep = stopTimes[routes[r][3]+\\\n", " # offset corresponding to stop p_i in route r\n", " np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i)[0][0] + \\\n", " routes[r][1]*t_r][1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 2) <\\\n", "np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 3):\n", " print(\"hello\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "routeStops[routes[1][2] + np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 2)[0][0]:routes[1][2]+routes[1][1]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "routeStops[routes[1][2] + np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 2)[0][0]:6]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "routeStops[routes[1][2]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "routeStops[np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 2)[0][0]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if True and \\\n", " 
True:\n", " print(\"hello\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tau[0][0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "stopTimes[3][1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "a = np.arange(1, 10)\n", "a" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "a[1:10:2]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "stopTimes[routes[0][3]+\\\n", " # offset corresponding to stop p_i in route r\n", " np.where(routeStops[routes[0][2]:routes[0][2]+routes[0][1]] == 0)[0][0]:\\\n", " # end of the trips of r\n", " routes[0][3]+routes[0][0]*routes[0][1]:\\\n", " # we can jump from the number of stops in r to find the next departure of route r at p_i\n", " routes[0][1]\n", " ]\n", "# we may more simply loop through all trips, and stop as soon as the departure time is after the arrival time\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "stopTimes[routes[0][3]+\\\n", " # offset corresponding to stop p_i in route r\n", " np.where(routeStops[routes[0][2]:routes[0][2]+routes[0][1]] == 0)[0][0]][1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "stopTimes[routes[1][3]+\\\n", " # offset corresponding to stop p_i in route r\n", " np.where(routeStops[routes[1][2]:routes[1][2]+routes[1][1]] == 3)[0][0] + \\\n", " routes[1][1]*1][1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# t_r is a trip that belongs to route r. 
t_r can take values 0 to routes[r][0]-1\n", "t = None\n", "r = 1\n", "tau_k_1 = tau[0][0]\n", "p_i = 3\n", "\n", "t_r = 0\n", "while True:\n", " \n", " t_r_dep = stopTimes[routes[r][3]+\\\n", " # offset corresponding to stop p_i in route r\n", " np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i)[0][0] + \\\n", " routes[r][1]*t_r][1]\n", " \n", " if t_r_dep > tau_k_1:\n", " # retrieving the index of the departure time of the trip in stopTimes\n", " #t = routes[r][3] + t_r * routes[r][1]\n", " t = t_r\n", " break\n", " t_r += 1\n", " # all trips exhausted: we could not hop on any trip of route r at this stop\n", " if t_r == routes[r][0]:\n", " break\n", " \n", "print(\"done\")\n", "print(t)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "r = 1\n", "t = 1\n", "p_i = 2\n", "# 1st trip of route + offset for the right trip + offset for the right stop\n", "# np.where returns a tuple of arrays, hence the [0][0]\n", "stopTimes[routes[r][3] + t * routes[r][1] + np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i)[0][0]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "d = []\n", "not d" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "r = 1\n", "t = 0\n", "p_i = 4\n", "arr_t_p_i = stopTimes[routes[r][3] + \\\n", " t * routes[r][1] + \\\n", " np.where(routeStops[routes[r][2]:routes[r][2]+routes[r][1]] == p_i)[0][0]][0]\n", "arr_t_p_i" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.datetime64('NaT') > np.datetime64('2100-01-01')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.datetime64('NaT') < np.datetime64('2100-01-01')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "jupytext": { "formats": "ipynb,md,py:percent" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": 
"python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 4 }