[{"content":"","permalink":"https://p2pit.com/photography/street/","summary":"","title":"Street"},{"content":"","permalink":"https://p2pit.com/photography/yellowstone/","summary":"","title":"Yellowstone"},{"content":"The Problem In a previous post I described a setup for sharing a Linux project folder with Windows using Samba, then hooking it into Dropbox via a directory symlink. The idea was that Dropbox would treat the symlink as a regular folder and sync everything to the cloud.\nIt did not work.\nDropbox on Windows refuses to follow symlinks that point to network drives. The folder showed up in Explorer with sync arrows on it, looked like it was working, but nothing ever actually synced. After trying every variation I could think of, including running Dropbox as Administrator, I gave up on that approach entirely.\nWhat I replaced it with is called Syncthing. It is simpler, does not involve Dropbox at all, and actually works.\nWhat is Syncthing Syncthing is a file synchronisation tool that works directly between your devices over the local network. There is no cloud involved. No third-party server ever touches your files. You install it on each machine, pair the devices together, choose which folders to share, and it keeps them in sync automatically.\nIt is open source, free, and runs as a background service. Once it is set up you do not need to think about it.\nThe tradeoff compared to Dropbox is that you lose cloud backup and remote access from outside your network. If that matters to you, Syncthing may not be the right choice. For keeping files in sync between two machines on the same network, it is exactly what it sounds like.\nRequirements Ubuntu 22.04 or later (Linux side) Windows 10 or later (Windows side) Both machines on the same local network sudo on Linux Part 1: Set Up Syncthing on Linux Install Syncthing sudo apt update \u0026amp;\u0026amp; sudo apt install -y syncthing Start it as a user service Running it as a user service means it starts automatically when you log in and runs without root access.\nsystemctl --user enable syncthing systemctl --user start syncthing Verify it is running:\nsystemctl --user status syncthing You should see active (running).\nOpen the web interface Syncthing includes a browser-based dashboard for managing everything. Open it at:\nhttp://localhost:8384 This is where you will configure folders and devices.\nPart 2: Set Up Syncthing on Windows Install Syncthing Open PowerShell and run:\nwinget install Syncthing.Syncthing Once it finishes, find the executable in:\n%LOCALAPPDATA%\\Microsoft\\WinGet\\Packages\\Syncthing.Syncthing_Microsoft.Winget.Source_8wekyb3d8bbwe\\ The exact folder name will include the version number. Navigate into it and find syncthing.exe.\nRun it Double-click syncthing.exe to start it. A terminal window will open and it will begin running in the background. After a few seconds, it will open the dashboard in your browser at http://127.0.0.1:8384.\nCreate a firewall rule Windows Firewall will block Syncthing from communicating with other devices unless you allow it. Run this in PowerShell as Administrator:\nNew-NetFirewallRule -DisplayName \u0026#34;Syncthing\u0026#34; -Direction Inbound -Program \u0026#34;C:\\path\\to\\syncthing.exe\u0026#34; -Action Allow Replace C:\\path\\to\\syncthing.exe with the actual path to the executable you found above.\nSet it to start automatically To have Syncthing start when Windows boots, create a shortcut to the executable and place it in your startup folder. 
Open Run (Win + R) and type:\nshell:startup This opens the startup folder. Copy a shortcut to syncthing.exe into it. From that point on it will start automatically every time you log in.\nPart 3: Pair the Two Devices Each Syncthing installation has a unique device ID. To let the two machines talk to each other, you need to add each device to the other\u0026rsquo;s list.\nGet the Linux device ID In the Linux Syncthing dashboard at http://localhost:8384, click Actions in the top right, then Show ID. Copy the long string of letters and numbers.\nAdd Linux to Windows In the Windows Syncthing dashboard at http://127.0.0.1:8384, click Add Remote Device. Paste the Linux device ID and give it a name (something like Linux or your machine name). Save it.\nAccept the pairing request on Linux Within a few seconds, the Linux dashboard will show a notification asking if you want to add the Windows device. Click Add Device and confirm. The two machines are now paired.\nPart 4: Share a Folder With the devices paired, you can share folders between them.\nAdd a folder on Linux In the Linux dashboard, click Add Folder. Give it a label, set the folder path (for example /home/yourusername/Projects), and click the Sharing tab. Tick the Windows device you just paired. Save.\nAccept the folder on Windows The Windows dashboard will show a notification asking if you want to accept the shared folder. Click Add and choose where you want the folder to land on your Windows machine (for example C:\\Users\\YourUsername\\Projects). Save.\nSyncthing will start syncing immediately. You can watch the progress in either dashboard.\nQuick Reference Step What to do Install on Linux sudo apt install -y syncthing Enable as service systemctl --user enable syncthing Install on Windows winget install Syncthing.Syncthing Open Linux dashboard http://localhost:8384 Open Windows dashboard http://127.0.0.1:8384 Get device ID Dashboard \u0026gt; Actions \u0026gt; Show ID Add a shared folder Dashboard \u0026gt; Add Folder \u0026gt; Sharing tab Worth Knowing Syncthing syncs in both directions by default. Changes made on either machine will propagate to the other. If you want one machine to be read-only, you can change the folder type to Receive Only in the folder settings on that device.\nIf one machine is off, Syncthing queues the changes and catches up as soon as it comes back online. Nothing is lost.\nSyncthing does not back up your files to the cloud. If you need off-site backup, you will need a separate solution for that. What Syncthing gives you is real-time sync between local machines, which is all I needed here.\nThe web dashboards are only accessible from the machine running Syncthing. They do not expose anything to the open internet.\n","permalink":"https://p2pit.com/posts/syncthing-linux-windows-file-sync/","summary":"\u003ch2 id=\"the-problem\"\u003eThe Problem\u003c/h2\u003e\n\u003cp\u003eIn a previous post I described a setup for sharing a Linux project folder with Windows using Samba, then hooking it into Dropbox via a directory symlink. The idea was that Dropbox would treat the symlink as a regular folder and sync everything to the cloud.\u003c/p\u003e\n\u003cp\u003eIt did not work.\u003c/p\u003e\n\u003cp\u003eDropbox on Windows refuses to follow symlinks that point to network drives. The folder showed up in Explorer with sync arrows on it, looked like it was working, but nothing ever actually synced. 
After trying every variation I could think of, including running Dropbox as Administrator, I gave up on that approach entirely.\u003c/p\u003e","title":"Why I Replaced Samba and Dropbox with Syncthing"},{"content":"The Problem I run a Linux machine as my main workstation and occasionally need to access project files from a Windows PC. I also wanted those files backed up to Dropbox without having to manually copy anything or change where my Dropbox folder lives.\nThe solution I settled on: share the folder over the local network from Linux, give the Linux machine a permanent address so the connection never breaks, map it as a drive on Windows, then trick Dropbox into backing it up by pointing it at that drive.\nFiles stay on Linux. Windows can read and write them. Dropbox backs everything up automatically without knowing anything unusual is going on.\nOne thing worth noting: Linux only exposes the one shared folder. It has no access to anything on the Windows machine.\nRequirements Ubuntu 22.04 or later Windows PC with Dropbox installed Both machines on the same network sudo on Linux, Administrator on Windows Part 1: Set Up the Shared Folder on Linux Samba is a piece of software that lets Linux speak the same file-sharing language as Windows. When it is running, Windows can browse and access folders on your Linux machine the same way it would access a shared folder on another Windows PC.\nInstall Samba Open a terminal on your Linux machine and run:\nsudo apt update \u0026amp;\u0026amp; sudo apt install -y samba Create the folder you want to share mkdir -p ~/Projects This creates a Projects folder in your home directory. This is the folder Windows will have access to.\nBack up the Samba config before editing it sudo cp /etc/samba/smb.conf /etc/samba/smb.conf.bak Always a good habit before editing system config files.\nAdd the share definition This tells Samba which folder to share and who is allowed in:\nsudo tee -a /etc/samba/smb.conf \u0026gt; /dev/null \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; [Projects] path = /home/yourusername/Projects browseable = yes read only = no guest ok = no valid users = yourusername create mask = 0664 directory mask = 0775 EOF Replace yourusername with your actual Linux username in both places.\nCheck the config for errors testparm -s If it prints your share definition without any errors you are good to move on.\nSet a Samba password Samba has its own login system separate from your Linux account. You need to create a password that Windows will use when connecting:\nsudo smbpasswd -a yourusername You will be prompted to enter and confirm a password. Write it down, you will need it shortly.\nStart Samba sudo systemctl enable smbd nmbd sudo systemctl restart smbd nmbd enable makes it start automatically every time the machine boots. restart starts it right now.\nPart 2: Give the Linux Machine a Permanent Address Every device on your network gets an IP address, which is basically its location on the network, something like 192.168.1.100. By default this address is assigned automatically and can change whenever the machine reconnects.\nThat is a problem here because the Windows drive mapping you are about to set up will use that address. If it changes, the connection breaks and you have to redo it.\nLocking it to a fixed address (called a static IP) solves that permanently.\nFind your active connection name nmcli connection show --active Look for the connection tied to your network. 
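If the full table is hard to scan, the same information is available in a terse form that prints only the connection name and device:\nnmcli -t -f NAME,DEVICE connection show --active\n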
It will be your Wi-Fi network name or something like Wired connection 1.\nFind your current IP address and gateway ip route show You are looking for two things:\nsrc followed by a number like 192.168.1.100 - that is your current IP via followed by a number like 192.168.1.1 - that is your gateway (the router) Lock in the static IP sudo nmcli connection modify \u0026#34;YOUR_CONNECTION_NAME\u0026#34; \\ ipv4.method manual \\ ipv4.addresses 192.168.1.100/24 \\ ipv4.gateway 192.168.1.1 \\ ipv4.dns \u0026#34;8.8.8.8,8.8.4.4\u0026#34; Replace YOUR_CONNECTION_NAME with the name from the first step, and use the IP and gateway you found above.\nApply the change sudo nmcli connection up \u0026#34;YOUR_CONNECTION_NAME\u0026#34; Your IP address is now permanent and will survive reboots.\nPart 3: Map the Shared Folder as a Drive on Windows Now that Linux is sharing the folder and has a permanent address, you can connect to it from Windows.\nOpen File Explorer, right-click This PC and select Map network drive.\nDrive letter: pick any letter that is not already in use (I used J:) Folder: \\\\192.168.1.100\\Projects (use your actual Linux IP) Check Reconnect at sign-in so it reconnects automatically after a restart Click Finish. When Windows asks for credentials, enter your Linux username and the Samba password you created in Part 1.\nThe Projects folder will now show up as a drive in File Explorer, just like a USB drive or any other disk.\nPart 4: Connect It to Dropbox The goal here is to make Dropbox back up the Linux Projects folder without moving your Dropbox folder or changing how Dropbox is set up.\nThe way to do this is with a symbolic link. Think of it like a shortcut, but one that Dropbox cannot tell apart from a real folder. When you create a symbolic link inside your Dropbox folder that points to the network drive, Dropbox sees it as a regular folder and syncs it like anything else.\nWindows has a built-in command for this called mklink. It creates these shortcuts from the command line.\nThere are two types of shortcuts mklink can create: junctions and symbolic links. The difference matters here. Junctions only work with folders physically on your computer\u0026rsquo;s own drives. Since the Projects folder lives on the Linux machine and comes in through the network, junctions will fail with this error:\nLocal volumes are required to complete the operation. Symbolic links work with both local and network paths, so that is what we use.\nOpen Command Prompt as Administrator (press the Windows key, type cmd, right-click it and choose Run as administrator) and run:\nmklink /D \u0026#34;C:\\Users\\YourWindowsUsername\\Dropbox\\Personal\\Projects\u0026#34; \u0026#34;\\\\192.168.1.100\\Projects\u0026#34; Replace YourWindowsUsername with your Windows username and adjust the Dropbox path to wherever you want the folder to appear inside Dropbox.\nIf a folder already exists at that Dropbox path, delete it first. mklink cannot overwrite an existing folder.\nOnce the command runs successfully, open Dropbox in File Explorer. You will see the Projects folder there. Dropbox will treat it as a normal local folder and start syncing it to the cloud.\nQuick Reference Step What to do Install Samba sudo apt install -y samba Add share Edit /etc/samba/smb.conf Set Samba password sudo smbpasswd -a username Set static IP nmcli connection modify ... 
Map drive on Windows File Explorer \u0026gt; This PC \u0026gt; Map network drive Dropbox symlink mklink /D \u0026quot;Dropbox\\path\u0026quot; \u0026quot;\\\\ip\\Projects\u0026quot; Worth Knowing Dropbox only syncs when the Linux machine is on and reachable on the network. If it is off, Dropbox will pause syncing for that folder but will not lose any data. It picks back up as soon as the machine comes back online.\nIf you ever move or rename the Projects folder on Linux, the connection from Windows breaks and you will need to redo the drive mapping and symlink. Treat the folder path as permanent once everything is set up.\n","permalink":"https://p2pit.com/posts/linux-windows-samba-dropbox/","summary":"\u003ch2 id=\"the-problem\"\u003eThe Problem\u003c/h2\u003e\n\u003cp\u003eI run a Linux machine as my main workstation and occasionally need to access project files from a Windows PC. I also wanted those files backed up to Dropbox without having to manually copy anything or change where my Dropbox folder lives.\u003c/p\u003e\n\u003cp\u003eThe solution I settled on: share the folder over the local network from Linux, give the Linux machine a permanent address so the connection never breaks, map it as a drive on Windows, then trick Dropbox into backing it up by pointing it at that drive.\u003c/p\u003e","title":"How to Share Linux Project Files with Windows via Samba and Dropbox"},{"content":"The Problem I use Claude as my main AI assistant for writing, scripting, and general IT work. It is genuinely useful, but API credits are not free. When you are doing something repetitive like reviewing drafts, critiquing code, or generating multiple versions of the same thing, those credits add up fast.\nI wanted a way to keep Claude for the tasks it is best at while offloading the heavy, repetitive work to something that costs nothing to run.\nThe answer was running AI models locally on my own machine.\nWhat is Ollama Ollama is a tool that lets you download and run AI language models directly on your computer, completely offline, at no cost per use. You run it once, pull whatever models you want, and from that point on every query you send to those models is free.\nThe tradeoff is that local models are generally not as capable as Claude. They can make mistakes, miss context, or produce weaker results on complex tasks. But for structured, well-defined jobs like generating a first draft or picking apart a piece of writing, they do the job well enough.\nThe idea is not to replace Claude. It is to use local models for the grunt work and save Claude for the final judgment call.\nThe Setup I am running this on a machine with an NVIDIA RTX 3080 (16GB VRAM) and 61GB of RAM. The GPU is what makes local models fast. Without a decent GPU you can still run them on CPU, but expect slower response times.\nInstall Ollama curl -fsSL https://ollama.com/install.sh | sh Verify it installed correctly:\nollama --version Pull the models I settled on four models, each with a specific job:\nollama pull llama3.1:8b ollama pull qwen2.5-coder:14b ollama pull deepseek-r1:8b ollama pull gemma3:12b This will take a few minutes depending on your internet connection. The files range from around 5GB to 9GB each.\nWhy These Four Models Each model has a different strength, which is why the pipeline uses all four rather than just picking one.\nllama3.1:8b is Meta\u0026rsquo;s Llama 3.1 at 8 billion parameters. It is fast and solid for general tasks. 
It handles the first draft.\ngemma3:12b is Google\u0026rsquo;s Gemma 3 at 12 billion parameters. It approaches problems differently than Llama, which is the point. Having two models generate independent responses means you get two different perspectives on the same question.\ndeepseek-r1:8b is a reasoning model. Unlike the others, it thinks through problems step by step before answering. This makes it well suited for critiquing the other two responses. It will catch things a straightforward model skips over.\nqwen2.5-coder:14b is Alibaba\u0026rsquo;s code-specialized model at 14 billion parameters. It handles the final refinement step. Because it is the largest and most specialized model in the group, it produces the best polished output.\nHow the Pipeline Works Instead of sending a single prompt to a single model, the pipeline runs the query through all four models in a specific sequence.\nStep 1: Generate (parallel) Llama and Gemma both receive the prompt at the same time and produce independent responses. Running them in parallel saves time.\nStep 2: Critique DeepSeek-R1 receives both responses along with the original prompt. Its job is to reason through what each response got right, what it got wrong, and what an ideal combined answer would look like.\nStep 3: Refine Qwen receives everything: the original prompt, both drafts, and the critique. It produces the final response, incorporating the best parts of both drafts and addressing the issues DeepSeek flagged.\nThe result is consistently better than what any single model would produce on its own.\nThe Script Save this as pipeline.py. It requires Python 3 and Ollama running in the background.\nimport argparse import json import subprocess import sys import time from concurrent.futures import ThreadPoolExecutor MODELS = { \u0026#34;generator_a\u0026#34;: \u0026#34;llama3.1:8b\u0026#34;, \u0026#34;generator_b\u0026#34;: \u0026#34;gemma3:12b\u0026#34;, \u0026#34;critic\u0026#34;: \u0026#34;deepseek-r1:8b\u0026#34;, \u0026#34;refiner\u0026#34;: \u0026#34;qwen2.5-coder:14b\u0026#34;, } def ollama_run(model, prompt): result = subprocess.run( [\u0026#34;ollama\u0026#34;, \u0026#34;run\u0026#34;, model], input=prompt, capture_output=True, text=True, timeout=300, ) if result.returncode != 0: raise RuntimeError(f\u0026#34;{model} failed: {result.stderr}\u0026#34;) return result.stdout.strip() def run_pipeline(prompt): print(\u0026#34;\\n[1/3] Generating drafts in parallel...\u0026#34;) with ThreadPoolExecutor(max_workers=2) as executor: fa = executor.submit(ollama_run, MODELS[\u0026#34;generator_a\u0026#34;], prompt) fb = executor.submit(ollama_run, MODELS[\u0026#34;generator_b\u0026#34;], prompt) draft_a, draft_b = fa.result(), fb.result() print(\u0026#34;[2/3] Critiquing both drafts...\u0026#34;) critic_prompt = f\u0026#34;\u0026#34;\u0026#34;You are a rigorous critic. Analyze both responses to this prompt. ORIGINAL PROMPT: {prompt} RESPONSE A: {draft_a} RESPONSE B: {draft_b} Evaluate accuracy, completeness, and clarity. Summarize what the ideal response should contain.\u0026#34;\u0026#34;\u0026#34; criticism = ollama_run(MODELS[\u0026#34;critic\u0026#34;], critic_prompt) print(\u0026#34;[3/3] Refining final response...\u0026#34;) refiner_prompt = f\u0026#34;\u0026#34;\u0026#34;You are an expert synthesizer. Using the two drafts and the critique below, write the best possible final response. 
ORIGINAL PROMPT: {prompt} DRAFT A: {draft_a} DRAFT B: {draft_b} CRITIQUE: {criticism} Write the final improved response.\u0026#34;\u0026#34;\u0026#34; final = ollama_run(MODELS[\u0026#34;refiner\u0026#34;], refiner_prompt) print(\u0026#34;\\n--- Final Response ---\u0026#34;) print(final) if __name__ == \u0026#34;__main__\u0026#34;: parser = argparse.ArgumentParser() parser.add_argument(\u0026#34;prompt\u0026#34;, nargs=\u0026#34;?\u0026#34;) args = parser.parse_args() if not args.prompt: print(\u0026#34;Enter your prompt (Ctrl+D when done):\u0026#34;) args.prompt = sys.stdin.read().strip() run_pipeline(args.prompt) Run it python3 pipeline.py \u0026#34;explain how DNS works\u0026#34; Or for longer prompts:\npython3 pipeline.py \u0026#34;write a PowerShell script that checks disk usage on all drives and emails an alert if any drive is above 80 percent\u0026#34; When to Use This vs Claude This pipeline is not meant to replace Claude for everything. Here is how I actually split the work:\nTask Use Writing first drafts Pipeline Reviewing and critiquing documents Pipeline Code generation with multiple approaches Pipeline Complex reasoning or planning Claude Anything where accuracy really matters Claude Final polish on important work Claude The pipeline handles volume. Claude handles quality-critical work.\nWorth Knowing The pipeline takes longer than a single model query. Expect anywhere from two to four minutes per run depending on your hardware, since it is running four models in sequence with one parallel step.\nEach model file sits between 5GB and 9GB on disk, so the full set takes around 28GB of storage. Make sure you have space before pulling all four.\nThe models run entirely on your machine. Nothing is sent to an external server. For work involving internal documentation or anything sensitive, this matters.\nIf you only have a CPU and no GPU, the pipeline will still work but will be significantly slower. A single query could take 10 to 15 minutes. It is still free, just slower.\n","permalink":"https://p2pit.com/posts/claude-ollama-local-llm-pipeline/","summary":"\u003ch2 id=\"the-problem\"\u003eThe Problem\u003c/h2\u003e\n\u003cp\u003eI use Claude as my main AI assistant for writing, scripting, and general IT work. It is genuinely useful, but API credits are not free. When you are doing something repetitive like reviewing drafts, critiquing code, or generating multiple versions of the same thing, those credits add up fast.\u003c/p\u003e\n\u003cp\u003eI wanted a way to keep Claude for the tasks it is best at while offloading the heavy, repetitive work to something that costs nothing to run.\u003c/p\u003e","title":"Using Local LLMs with Ollama to Save on AI API Credits"},{"content":"Overview Tracking mailbox sizes in Office 365 is critical for managing storage and preventing disruptions.\nOur tenant has a large number of mailboxes. 
We needed a way to check active users and their current storage usage accurately.\nMicrosoft\u0026rsquo;s built-in tools lag by multiple days and don\u0026rsquo;t provide a real-time view of mailbox sizes.\nTo solve this, I created a PowerShell script that:\nPulls real-time mailbox size data using Microsoft\u0026rsquo;s CLI tools Targets a specific distribution list instead of scanning the entire tenant Exports results to a CSV file for tracking and analysis Ensures we work with up-to-date and accurate data This approach helps IT teams proactively manage storage and avoid mailbox overages.\nWhat This Script Does Retrieves:\nUser\u0026rsquo;s name \u0026amp; email address Mailbox size in GB Whether archive is enabled Retention policy assigned Number of mailbox items Archive size Auto-Archive status Targets:\nA specific distribution list instead of scanning all mailboxes. Outputs:\nA CSV file that can be opened in Excel for easy analysis. Requirements Before running this script, ensure you have:\nPowerShell 5.1+ Exchange Online PowerShell Module Administrator privileges A distribution list containing all users you need to check The PowerShell Script # Define the group email address $groupEmail = \u0026#34;staff@domain.com\u0026#34; # Get members of the distribution group $groupMembers = Get-DistributionGroupMember -Identity $groupEmail | Select-Object Name, DisplayName, PrimarySmtpAddress, RecipientType # Export to CSV $exportPath = \u0026#34;C:\\365_reports\\staff_\u0026#34; + (Get-Date -Format \u0026#34;yyyyMMdd\u0026#34;) + \u0026#34;.csv\u0026#34; $groupMembers | Export-Csv -Path $exportPath -NoTypeInformation Write-Output \u0026#34;Group members exported to $exportPath\u0026#34; # Target only Members of Staff + AutoExpand $staffGroupMembers = Get-DistributionGroupMember -Identity \u0026#34;staff@domain.com\u0026#34; | Where-Object { $_.RecipientType -eq \u0026#39;UserMailbox\u0026#39; } # Fetch mailbox statistics for group members only $staffGroupMembers | ForEach-Object { $mailboxDetails = Get-Mailbox -Identity $_.PrimarySmtpAddress $stats = Get-MailboxStatistics -Identity $_.PrimarySmtpAddress $archiveStats = Get-MailboxStatistics -Archive -Identity $_.PrimarySmtpAddress $totalItemSizeGB = switch ($stats.TotalItemSize.ToString().Split(\u0026#34; \u0026#34;)[1]) { \u0026#34;KB\u0026#34; { [math]::Round([double]$stats.TotalItemSize.ToString().Split(\u0026#34; \u0026#34;)[0] / 1MB, 2) } \u0026#34;MB\u0026#34; { [math]::Round([double]$stats.TotalItemSize.ToString().Split(\u0026#34; \u0026#34;)[0] / 1KB, 2) } \u0026#34;GB\u0026#34; { [math]::Round([double]$stats.TotalItemSize.ToString().Split(\u0026#34; \u0026#34;)[0], 2) } Default { [double]$stats.TotalItemSize.ToString().Split(\u0026#34; \u0026#34;)[0] } } $archiveStatus = if ($archiveStats.ArchiveQuota -ne \u0026#34;0 B\u0026#34;) {\u0026#34;Enabled\u0026#34;} else {\u0026#34;Disabled\u0026#34;} $autoArchiveEnabled = if ($mailboxDetails.AutoExpandingArchiveEnabled) {\u0026#34;Enabled\u0026#34;} else {\u0026#34;Disabled\u0026#34;} $_ | Select-Object DisplayName, PrimarySmtpAddress, @{Name=\u0026#34;RetentionPolicy\u0026#34;;Expression={$mailboxDetails.RetentionPolicy}}, @{Name=\u0026#34;TotalItemSizeGB\u0026#34;;Expression={$totalItemSizeGB}}, @{Name=\u0026#34;ItemCount\u0026#34;;Expression={$stats.ItemCount}}, @{Name=\u0026#34;ArchiveStatus\u0026#34;;Expression={$archiveStatus}}, @{Name=\u0026#34;ArchiveSize\u0026#34;;Expression={$archiveStats.TotalItemSize}}, 
@{Name=\u0026#34;AutoArchiveEnabled\u0026#34;;Expression={$autoArchiveEnabled}} } | Export-Csv -Path (\u0026#34;C:\\365_reports\\StaffMailboxStatistics_\u0026#34; + (Get-Date -Format \u0026#34;yyyyMMdd\u0026#34;) + \u0026#34;.csv\u0026#34;) -NoTypeInformation Script Breakdown \u0026amp; Customization Guide 1. Define the Distribution List $groupEmail = \u0026#34;staff@domain.com\u0026#34; Replace staff@domain.com with your distribution list. Ensure the list includes only active users.\n2. Get Distribution List Members $groupMembers = Get-DistributionGroupMember -Identity $groupEmail | Select-Object Name, DisplayName, PrimarySmtpAddress, RecipientType Retrieves users from the distribution list and selects name, display name, email, and recipient type.\n3. Export Members to CSV $exportPath = \u0026#34;C:\\365_reports\\staff_\u0026#34; + (Get-Date -Format \u0026#34;yyyyMMdd\u0026#34;) + \u0026#34;.csv\u0026#34; $groupMembers | Export-Csv -Path $exportPath -NoTypeInformation Saves the list to a CSV file using the current date in the filename. Change the file path if needed.\n4. Target Only User Mailboxes $staffGroupMembers = Get-DistributionGroupMember -Identity \u0026#34;staff@domain.com\u0026#34; | Where-Object { $_.RecipientType -eq \u0026#39;UserMailbox\u0026#39; } Filters for user mailboxes only — excludes shared mailboxes and groups.\n5. Get Mailbox Statistics $staffGroupMembers | ForEach-Object { $mailboxDetails = Get-Mailbox -Identity $_.PrimarySmtpAddress $stats = Get-MailboxStatistics -Identity $_.PrimarySmtpAddress $archiveStats = Get-MailboxStatistics -Archive -Identity $_.PrimarySmtpAddress Retrieves mailbox size, item count, archive status, and retention policy.\n6. Convert Mailbox Size to GB $totalItemSizeGB = switch ($stats.TotalItemSize.ToString().Split(\u0026#34; \u0026#34;)[1]) { \u0026#34;KB\u0026#34; { [math]::Round([double]$stats.TotalItemSize.ToString().Split(\u0026#34; \u0026#34;)[0] / 1MB, 2) } \u0026#34;MB\u0026#34; { [math]::Round([double]$stats.TotalItemSize.ToString().Split(\u0026#34; \u0026#34;)[0] / 1KB, 2) } \u0026#34;GB\u0026#34; { [math]::Round([double]$stats.TotalItemSize.ToString().Split(\u0026#34; \u0026#34;)[0], 2) } Default { [double]$stats.TotalItemSize.ToString().Split(\u0026#34; \u0026#34;)[0] } } Converts mailbox size to GB for consistent reporting.\n7. Check Archive \u0026amp; Retention Policies $archiveStatus = if ($archiveStats.ArchiveQuota -ne \u0026#34;0 B\u0026#34;) {\u0026#34;Enabled\u0026#34;} else {\u0026#34;Disabled\u0026#34;} $autoArchiveEnabled = if ($mailboxDetails.AutoExpandingArchiveEnabled) {\u0026#34;Enabled\u0026#34;} else {\u0026#34;Disabled\u0026#34;} Checks if archive is enabled and determines auto-archiving status.\n8. Export the Final Report $_ | Select-Object DisplayName, PrimarySmtpAddress, @{Name=\u0026#34;RetentionPolicy\u0026#34;;Expression={$mailboxDetails.RetentionPolicy}}, @{Name=\u0026#34;TotalItemSizeGB\u0026#34;;Expression={$totalItemSizeGB}}, @{Name=\u0026#34;ItemCount\u0026#34;;Expression={$stats.ItemCount}}, @{Name=\u0026#34;ArchiveStatus\u0026#34;;Expression={$archiveStatus}}, @{Name=\u0026#34;ArchiveSize\u0026#34;;Expression={$archiveStats.TotalItemSize}}, @{Name=\u0026#34;AutoArchiveEnabled\u0026#34;;Expression={$autoArchiveEnabled}} Selects all fields for the report. Modify as needed to include or exclude fields.\n9. 
Save the Report with a Timestamp Export-Csv -Path (\u0026#34;C:\\365_reports\\StaffMailboxStatistics_\u0026#34; + (Get-Date -Format \u0026#34;yyyyMMdd\u0026#34;) + \u0026#34;.csv\u0026#34;) -NoTypeInformation Saves the final report as a CSV with a date-stamped filename.\nSummary of Changes to Make Section Change Define Distribution List Update to your group email File Export Path Change where reports are saved Fields in Report Add or remove fields Data Filtering Adjust mailbox type filters Final Thoughts This script helps Exchange admins automate mailbox monitoring. Customize the distribution list, fields, and export path to fit your needs.\nNeed help customizing the script? Email me!\n","permalink":"https://p2pit.com/posts/check-mailbox-size-office365/","summary":"\u003ch2 id=\"overview\"\u003eOverview\u003c/h2\u003e\n\u003cp\u003eTracking \u003cstrong\u003emailbox sizes in Office 365\u003c/strong\u003e is critical for managing storage and preventing disruptions.\u003c/p\u003e\n\u003cp\u003eOur tenant has a large number of mailboxes. We needed a way to check \u003cstrong\u003eactive users\u003c/strong\u003e and their \u003cstrong\u003ecurrent storage usage\u003c/strong\u003e accurately.\u003c/p\u003e\n\u003cp\u003eMicrosoft\u0026rsquo;s built-in tools \u003cstrong\u003elag by multiple days\u003c/strong\u003e and don\u0026rsquo;t provide a real-time view of mailbox sizes.\u003c/p\u003e\n\u003cp\u003eTo solve this, I created a \u003cstrong\u003ePowerShell script\u003c/strong\u003e that:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003ePulls \u003cstrong\u003ereal-time mailbox size data\u003c/strong\u003e using Microsoft\u0026rsquo;s CLI tools\u003c/li\u003e\n\u003cli\u003eTargets a \u003cstrong\u003especific distribution list\u003c/strong\u003e instead of scanning the entire tenant\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eExports results to a CSV file\u003c/strong\u003e for tracking and analysis\u003c/li\u003e\n\u003cli\u003eEnsures we work with \u003cstrong\u003eup-to-date and accurate data\u003c/strong\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eThis approach helps IT teams \u003cstrong\u003eproactively manage storage\u003c/strong\u003e and avoid mailbox overages.\u003c/p\u003e","title":"How to Check Mailbox Sizes in Office 365 with PowerShell"},{"content":"About P2PIT P2PIT is a technical blog focused on IT infrastructure, cloud computing, and automation.\nTopics covered include:\nCloud \u0026amp; Virtualization — design, deployment, and management Networking — routing, switching, firewalls, VPNs Automation \u0026amp; DevOps — scripting, CI/CD, infrastructure as code Self-hosted \u0026amp; Homelab — running your own services Linux — administration, tools, and tips Got a question or want to suggest a topic? 
Use the \u0026ldquo;Request an Update\u0026rdquo; link on any post.\n","permalink":"https://p2pit.com/about/","summary":"\u003ch2 id=\"about-p2pit\"\u003eAbout P2PIT\u003c/h2\u003e\n\u003cp\u003eP2PIT is a technical blog focused on IT infrastructure, cloud computing, and automation.\u003c/p\u003e\n\u003cp\u003eTopics covered include:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eCloud \u0026amp; Virtualization\u003c/strong\u003e — design, deployment, and management\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eNetworking\u003c/strong\u003e — routing, switching, firewalls, VPNs\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eAutomation \u0026amp; DevOps\u003c/strong\u003e — scripting, CI/CD, infrastructure as code\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eSelf-hosted \u0026amp; Homelab\u003c/strong\u003e — running your own services\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eLinux\u003c/strong\u003e — administration, tools, and tips\u003c/li\u003e\n\u003c/ul\u003e\n\u003chr\u003e\n\u003cp\u003e\u003cem\u003eGot a question or want to suggest a topic? Use the \u0026ldquo;Request an Update\u0026rdquo; link on any post.\u003c/em\u003e\u003c/p\u003e","title":"About"}]