<DF>.to_sql('<table_name>', <connection>) # Also `if_exists='fail/replace/append'`.
```
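* **A minimal sketch of writing and reading an SQL table, using an in-memory SQLite connection (the 'items' table name and the example frame are made up for illustration):**
```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(':memory:')                            # Raw sqlite3 and SQLAlchemy connections both work.
df = pd.DataFrame({'id': [1, 2], 'name': ['a', 'b']})
df.to_sql('items', conn, if_exists='replace', index=False)    # 'fail' raises, 'append' adds rows.
print(pd.read_sql('SELECT * FROM items', conn))               # Loads the table back into a DataFrame.
```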
* **Read\_csv() only parses dates in the columns that are listed in its 'parse\_dates' argument. It automatically tries to detect the format, but can be helped with the 'date\_format' or 'dayfirst' arguments. Both dates and datetimes get stored as pd.Timestamp objects.**
* **If read\_csv() encounters even a single invalid date, it returns the whole column as a series of strings, unlike `'<S> = pd.to_datetime(<S>, errors="coerce")'`, which replaces invalid values with pd.NaT.**
* **To get specific attributes from a series of Timestamps use `'<S>.dt.year/date/…'`.**
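* **A minimal sketch of the date handling described above (the column names and the inline CSV are made up for illustration):**
```python
import io
import pandas as pd

csv_text = 'a_date,b_date\n2024-01-05,2024-01-06\n2024-01-07,oops\n'
df = pd.read_csv(io.StringIO(csv_text), parse_dates=['a_date', 'b_date'])
print(df.dtypes)                                              # 'a_date' is datetime64, but 'b_date' stays a
                                                              #   column of strings because of the 'oops' value.
df['b_date'] = pd.to_datetime(df['b_date'], errors='coerce')  # The invalid value becomes pd.NaT.
print(df['a_date'].dt.year)                                   # Extracts an attribute from each Timestamp.
```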
@@ -2724,25 +2724,25 @@ c <span class="hljs-number">6</span> <span class="hljs-number">7</span>
</ul>
<div><h4 id="dataframemultiindex">DataFrame — Multi-Index:</h4><pre><code class="python language-python hljs"><DF> = <DF>.xs(key, level=<int>) <span class="hljs-comment"># Rows with key on passed level of multi-index.</span>
<DF> = <DF>.xs(keys, level=<ints>, axis=<span class="hljs-number">1</span>) <span class="hljs-comment"># Cols that have first key on first level, etc.</span>
<DF> = <DF>.set_index(col_keys) <span class="hljs-comment"># Creates index from cols. Also `append=False`.</span>
<S/DF> = <DF>.stack/unstack(level=<span class="hljs-number">-1</span>) <span class="hljs-comment"># Combines col keys with row keys or vice versa.</span>
<DF> = pd.read_sql(<span class="hljs-string">'<table/query>'</span>, <conn>) <span class="hljs-comment"># Pass SQLite3/Alchemy connection (see #SQLite).</span>
</code></pre></div>
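<ul>
<li><strong>A minimal sketch of the multi-index calls above (the city/year frame is made up for illustration): set_index() builds a two-level index, xs() selects rows by one level, and unstack()/stack() move that level between rows and columns.</strong></li>
</ul>
<pre><code class="python language-python hljs">import pandas as pd

df = pd.DataFrame({'city': ['Oslo', 'Oslo', 'Bergen', 'Bergen'],
                   'year': [2023, 2024, 2023, 2024],
                   'pop':  [709, 717, 286, 291]})
df   = df.set_index(['city', 'year'])           # Two columns become a two-level row index.
oslo = df.xs('Oslo', level='city')              # Rows whose 'city' level equals 'Oslo'.
wide = df.unstack()                             # Moves the innermost level ('year') to columns.
tall = wide.stack()                             # Moves it back from columns to rows.
</code></pre>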
<pre><code class="python language-python hljs"><dict> = <DF>.to_dict(<span class="hljs-string">'d/l/s/…'</span>) <span class="hljs-comment"># Returns columns as dicts, lists or series.</span>
<DF>.to_json/csv/html/parquet/latex(<path>) <span class="hljs-comment"># Returns a string/bytes if path is omitted.</span>
<DF>.to_pickle/excel(<path>) <span class="hljs-comment"># Run `$ pip3 install "pandas[excel]" odfpy`.</span>
<DF>.to_sql(<span class="hljs-string">'<table_name>'</span>, <connection>) <span class="hljs-comment"># Also `if_exists='fail/replace/append'`.</span>
</code></pre>
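<ul>
<li><strong>A minimal sketch of the export calls above (the frame and the 'out.csv' filename are made up for illustration): to_dict() picks the container type from its first argument, and to_csv() returns the text itself when no path is given.</strong></li>
</ul>
<pre><code class="python language-python hljs">import pandas as pd

df = pd.DataFrame({'x': [1, 2], 'y': [3, 4]}, index=['a', 'b'])
as_lists = df.to_dict('list')                   # {'x': [1, 2], 'y': [3, 4]}
as_text  = df.to_csv()                          # Returns the CSV as a string, since no path was given.
df.to_csv('out.csv')                            # Writes the same text to a file instead.
</code></pre>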
<ul>
<li><strong>Read_csv() only parses dates in the columns that are listed in its 'parse_dates' argument. It automatically tries to detect the format, but can be helped with the 'date_format' or 'dayfirst' arguments. Both dates and datetimes get stored as pd.Timestamp objects.</strong></li>
<li><strong>If read_csv() encounters even a single invalid date, it returns the whole column as a series of strings, unlike <code class="python hljs"><span class="hljs-string">'<S> = pd.to_datetime(<S>, errors="coerce")'</span></code>, which replaces invalid values with pd.NaT.</strong></li>
<li><strong>To get specific attributes from a series of Timestamps use <code class="python hljs"><span class="hljs-string">'<S>.dt.year/date/…'</span></code>.</strong></li>
</ul>
@@ -2934,7 +2934,7 @@ $ deactivate <span class="hljs-comment"># Deactivates the active